Developing an asset/node based CMS [closed]

I'd like to develop a CMS for fun/personal use, built on an asset-based architecture rather than a page-based one (the "why" is the purpose of this question), but I can't find much information on the subject. All I've found barely scrapes the surface (there's a good chance I'm searching with the wrong terms).
An asset-based CMS stores information as blocks of text called assets. These individual assets are then related to each other to automatically build pages.
What are the (dis/)advantages of such a system?
What are the primary principles of asset-based architecture?
What should and shouldn't be an 'asset'? Where can I read more?

Decided to try to answer this after leaving my comment :)
If your definition of "asset" is along the lines of a "node" (such as in Drupal), or a document (such as the JSON-style documents in MongoDB or CouchDB), then here is some info:
I'll use the term "node" for this post; I think it's the closest to "asset" and the more widely used term. This also might be a very abstract answer, but hopefully it will at least get you thinking and pointed in the right direction.
Node-based architecture could be described as a cross between neural networking patterns and object-oriented programming. The key is that "nodes" are points of data, and nodes can be connected to each other in some way.
Some architectures will treat nodes much like object-oriented classes, where different classes of nodes can inherit various characteristics from parent nodes: every type of node inherits the basic properties of its parent. An "Essay" node might inherit the properties of a "Text-Document" node, which in turn inherits the properties of the base node. Drupal implements this inheritance model well, although it does not emphasize the connections between nodes in the way that something like Facebook's GraphAPI/Open Graph Protocol does.
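As a rough sketch of that inheritance idea (the class and property names below are made up for illustration, not taken from any particular CMS):

# Hypothetical node-type hierarchy: every node type inherits the
# basic properties of its parent type.
class Node:
    def __init__(self, title):
        self.title = title
        self.links = []          # connections to other nodes

    def connect(self, other):
        self.links.append(other)

class TextDocument(Node):
    def __init__(self, title, body=""):
        super().__init__(title)
        self.body = body         # adds a text body to the base node

class Essay(TextDocument):
    def __init__(self, title, body="", author=None):
        super().__init__(title, body)
        self.author = author     # adds essay-specific metadata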
This pattern of node-based architecture can be implemented at any level, too, and exists in nature - think of social circles within society or ecosystems ;) On a software engineering level, it can take the form of a database, such as how MongoDB simply has nodes of data (which are called documents in that case). These documents can reference other documents, although, like Drupal, Mongo does not emphasize connectedness. Ironically, relational databases like MySQL, which are the opposite of document-based databases, actually emphasize connectedness more, but that's a discussion for another day. Facebook's GraphAPI, which I mentioned above, is implemented at the Web-API level, and the Open Graph Protocol shapes it. And again, something like Drupal implements the pattern at the front-end level (although its back-end implements the node pattern at a lower level, of course).
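For example, in a document database the "connections" are usually just references between documents. A minimal sketch (collection and field names are made up; pymongo-style calls, assuming a local MongoDB instance):

from pymongo import MongoClient

client = MongoClient()                      # assumes a local MongoDB instance
db = client["cms_example"]                  # hypothetical database name

# Two "nodes" (documents); the page references the asset by its id.
asset_id = db.assets.insert_one(
    {"type": "text-block", "body": "Hello, world"}).inserted_id
page_id = db.pages.insert_one(
    {"title": "Home", "asset_ids": [asset_id]}).inserted_id

# Building the page means following the references.
page = db.pages.find_one({"_id": page_id})
blocks = [db.assets.find_one({"_id": a}) for a in page["asset_ids"]]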
Lastly, node-based architecture is much more flexible than traditional document/page-based CMS architecture, but that also means there is a lot more programming and configuring to be done on the developers' side. A node-based system will end up being far more inter-connected, and its components will be integrated with one another at a deeper level, but it can also be more susceptible to breaking because of this deep level of connection - it is less cleanly separated into individual modules. Personally, I see a huge trend where people are moving to become more "node-based" and less "content-based" as people begin to interact with websites more like applications than like the electronic magazines of the 90's. Plus, the node pattern fits well with the increasing emphasis on user contribution and social browsing, because adding people and their accounts/profiles to a web site dramatically increases the complexity.
I know you said "asset," so I'll also say that asset emphasizes the data side of the node pattern more, whereas "node" emphasizes the connections between the pieces of data more.
But for further reading, I'd recommend reading up on the architecture of the software I mentioned. You could also check out node.js, JSON, document-based databases, and GraphAPIs, as they seem to fit well with this idea of asset/node-based architecture. I'm sure Wikipedia has some good stuff on these patterns as well.

You could very quickly scale this up using the CakePHP framework. It uses an MVC pattern and it provides classes called elements that may be inserted into layouts and can load whatever content you want based on the page, user, moon phase, etc.
<page>
<element calls methodX>
<element calls methodY>
<Default Content relies on Controller Action(view/edit/add/custom)>
<element calls methodZ>
</page>

I think you might be describing a CMS backed by a content repository.
The repository itself can be provided by Apache Jackrabbit, which implements JSR 170:
The API should be a standard, implementation independent, way to access content bi-directionally on a granular level within a content repository. A Content Repository is a high-level information management system that is a superset of traditional data repositories. A content repository implements "content services" such as: author based versioning, full textual searching, fine grained access control, content categorization and content event monitoring. It is these "content services" that differentiate a Content Repository from a Data Repository.
For a CMS working on top of a content repository, look at Nuxeo.

Related

Why are shared libraries between microservices bad? [closed]

Sam Newman states in his book Building Microservices:
The evils of too much coupling between services are far worse than the problems caused by code duplication
I just don't understand how the shared code between the services is evil. Does the author mean the service boundaries themselves are poorly designed if a need for a shared library emerges, or does he really mean I should duplicate the code in the case of common business logic dependency? I don't see what that solves.
Let's say I have a shared library of entities common to two services. The common domain objects for two services may smell, but one service is the GUI to tweak the state of those entities, and the other is an interface for other services to poll that state for their own purposes. Same domain, different function.
Now, if the shared knowledge changes, I would have to rebuild and deploy both services regardless of whether the common code is an external dependency or duplicated across the services. Generally, the same concern applies to any case of two services depending on the same piece of business logic. In this case, I see only harm in duplicating the code, as it reduces the cohesion of the system.
Of course, diverging from the shared knowledge may cause headaches in the case of a shared library, but even this could be solved with inheritance, composition, and clever use of abstractions.
So, what does Sam mean by saying code duplication is better than too much coupling via shared libraries?
The evils of too much coupling between services are far worse than the problems caused by code duplication
The author is very unspecific when he uses the generic word "coupling". I would agree with certain types of coupling being a strict no-no (like sharing databases or using internal interfaces). However, the use of common libraries is not one of those. For example, if you develop two microservices using golang, you already have a shared dependency (on golang's basic libraries). The same applies to libraries that you develop yourself for sharing purposes. Just pay attention to the following points:
Treat shared libraries as you would dependencies on 3rd-party entities.
Make sure each component / library / service has a distinct business purpose.
Version them correctly and leave the decision of which version of the library to use to the corresponding microservice teams (see the sketch after this list).
Set up responsibilities for the development and testing of shared libraries separately from the microservice teams.
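For example (the package name and versions below are made up), each service pins its own version of the internally shared library in its own dependency file, exactly as it would for a 3rd-party package:

# service-a/requirements.txt
acme-shared-dto==2.1.0    # team A has already upgraded

# service-b/requirements.txt
acme-shared-dto==1.8.3    # team B upgrades on its own schedule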
Don't forget: the microservices architectural style is not so much focused on code organization or internal design patterns as on the larger organizational and process-relevant aspects that allow scaling application architectures, organizations, and deployments. See this answer for an overview.
Short
The core concept of the microservice architecture is that microservices have their own independent development and release cycles. "Shared libraries" undermine this.
Longer
From my own experience, it's very important to keep microservices isolated and independent as much as possible. Isolation is basically about being able to release & deploy the service independently of any other services most of the time.
In other words, it's something like:
you build a new version of a service
you release it (after tests)
you deploy it into production
you have not caused a deployment cascade across your whole environment.
"Shared libraries" in my definition those libraries, do hinder you to do so.
It's "funny" how "Shared Libraries" poison your architecture:
Oh we have a User object! Let's reuse it everywhere!
This leads to a "shared library" for the whole enterprise and starts to undermine Bounded Contexts (DDD), forces you to dependent on one technology
we already have this shared library with TDOs you need, written in
java...
Repeating myself: a new version of this kind of shared lib will affect all services and complicate your deployments, up to very fragile setups. The consequence is that at some point nobody trusts themselves to develop the next release of the common shared library, or everyone fears the big-bang releases.
All of this just for the sake of "Don't repeat yourself"? This is not worth it (my experience proves it).
In practice, the shared, compromised "User" object is very seldom better than several focused User objects in the particular microservices.
However, there is never a silver bullet and Sam gives us only a guideline and advice (a heuristic if you like) based on his projects.
My take
I can give you my experience. Don't start a microservice project by reasoning about shared libraries. Just don't do them in the beginning, and accept some code repetition between services. Invest time in DDD and in the quality of your Domain Objects and Service Boundaries. Learn along the way which parts are stable and which evolve fast.
Once you or your team have gained enough insight, you can refactor some parts into libraries. Such refactoring is usually very cheap in comparison to the reverse approach.
And these libraries should probably cover some boilerplate code and be focused on one task - have several of them, not one common-lib-for-everything. In the comment above, Oswin Noetzelmann gave some advice on how to proceed. Taking his approach to the maximum would lead to good, focused libraries and not toxic "shared libraries".
A good example of tight coupling where duplication would be acceptable is a shared library defining the interface/DTOs between services - in particular, using the same classes/structs to serialize/deserialize data.
Let's say you have two services - A and B - that both accept slightly different, but overall almost identical-looking, JSON input.
It would be tempting to make one DTO describing the common keys, plus the very few keys used only by service A or service B, and ship it as a shared library.
For some time the system works fine. Both services add the shared library as a dependency, and they build and run properly.
With time, though, service A requires some additional data that changes the structure of the JSON where it was the same before. As a result you can't use the same classes/structs to deserialize the JSON for both services at the same time - the change is needed for service A, but then service B won't be able to deserialize the data.
You must change the shared library, add the new feature to service A and rebuild it, then rebuild service B to adjust it to the new version of the shared library even though no logic has changed there.
Now, had you defined the DTOs separately and internally for both services from the very beginning, their contracts could later evolve separately and safely in any direction you could imagine. Sure, at first it might have looked smelly to keep almost the same DTOs in both services, but in the long run it gives you freedom of change.
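As a sketch of that freedom (the field names are invented for illustration), each service owns its own DTO, so A can evolve without touching B:

# service_a/dto.py
from dataclasses import dataclass
from typing import Optional

@dataclass
class OrderDto:
    order_id: str
    amount: float
    discount_code: Optional[str] = None   # added later for service A only

# service_b/dto.py
from dataclasses import dataclass

@dataclass
class OrderDto:
    order_id: str
    amount: float                          # service B never needed the new field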
At the end of the day, (micro)services don't differ that much from a monolith. Separation of concerns and isolation are critical. Some dependencies can't be avoided (language, framework, etc.), but before you introduce any additional dependency yourself, think twice about the future implications.
I'd rather follow the given advice: duplicate DTOs and avoid shared code unless you really can't avoid it. It has bitten me in the past. The scenario above is a trivial one, but it may be much more nuanced and affect many more services. Unfortunately it hits you only after some time, so the impact may be big.
There is no absolute answer to this. You'll always find an example of a reasonable exception to the rule. We should take this as 'guidelines'.
With that being said, yes, coupling between services is something to avoid, and a shared library is a warning sign of coupling.
As other answers have explained, microservice lifecycles should be independent.
As for your example, I think it strongly depends on what kind of logic/responsibilities the library has.
If it is business logic, something is odd. Maybe you need to split the library into different libraries with different responsibilities. If that responsibility is unique and can't be split, you should wonder whether those two services should really be just one. And if that library has business logic that feels weird in those two services, most likely that library should be a service in its own right.
Each microservice is autonomous, so each executable has its own copy of the shared libraries - doesn't that mean there is no coupling through a shared library?
Spring Boot even packages the language runtime inside the microservice's package.
Nothing is shared, not even the runtime, so I don't see a problem in using a library or common package in a microservice.
If a shared library creates coupling between microservices, then is using the same language in different microservices also a problem?
I was also confused while reading "Building Microservices" by Sam Newman.

Any CSLA 4 downloadable sample applications? [closed]

I am looking for a full web (MVC or WebForms) sample application based on CSLA 4.0. Any ideas? I think its ProjectTracker sample is WinForms only and based on an older version of CSLA.
Mark's experience with CSLA seems to be quite outdated. Nearly every point he made is inaccurate. CSLA is built for users' use-case scenarios, especially data binding to UIs.
1) Using the folder analogy is completely inappropriate. You can have a single business object act as both a parent and child if you so choose, just not the same instance of your business object. Lazy loading of children is completely supported as well.
2) The serialization overhead is no more than what RIA Services incurs, as CSLA ultimately uses the DataContractSerializer to serialize objects. Additionally, MobileFormatter has been updated to allow for custom serializers; binary is now supported as well as the original XML. Ultimately it all still goes through the DataContractSerializer.
3) You can create any kind of DataPortal replacement, including using JSON within your own custom DataPortal. And CSLA command objects support managed properties, so serialization works exactly the same way as business objects.
4) It's true there is no in-place merge, however, I've never found this to be a problem.
5) Subscribers never get serialized with the business object. If your DataPortal is only local, then the original object is sent (not serialized), and so any subscribers it has will naturally still be attached.
I have no problem leveraging CSLA in both Windows Forms and Silverlight environments. For 95% of business-user use cases, CSLA brings a lot to the table.
http://www.lhotka.net/cslacvs/viewvc.cgi/core/trunk/Samples/NET/cs/ProjectTracker/Mvc3UI/ is the MVC3 part of the famous CSLA ProjectTracker sample. This might be the one to learn from.
Rocky himself checked in a change just 2 days ago, so this is probably as cutting edge as you can get for a CSLA sample, from the author himself.
Here are instructions on pulling the code from svn:
http://www.lhotka.net/cslanet/Repository.aspx
My advice - do not use CSLA. I am going to quote my reply to https://stackoverflow.com/questions/1234/have-you-attended-the-csla-master-class:
I have two years of experience with CSLA. In fact, when I started our project I really did not want to write an entity framework from scratch, something that was done in all of my previous jobs.
So, I picked CSLA. Like any entity framework, it has good and bad points. I will list a few of the bad ones, because the good ones are described in abundance on the CSLA-related sites. So, the nays:
CSLA parent-child relationship does not support the folder-file pattern, where files are children of the parent folder but are also independent entities. In CSLA, children are an integral part of the parent, so you cannot, for instance, update/delete/add a single child without updating the whole object tree. Forget about lazy loading of children - no such thing. In short, if your data model represents a folder-file like structure - do not use CSLA. We had to twist CSLA's arm to let it support this mode.
Huge overhead in terms of state. Define a business object with 3 properties. Now send it over the wire using some HTTP binding. Pay attention to what gets transmitted. I know XML is not the best serialization vehicle, but your 3 properties are translated to ~4KB of XML. What does it include? Business rules and field data manager state, among other things. Extremely bloated. We employ zip compression, but still this is very disturbing.
Silverlight does not have a normal serialization engine, so CSLA comes with Mobile serialization, which is good if there is nothing else. The thing is that there are other things - JSON and protocol buffers - but CSLA is incompatible with these techniques. And Mobile serialization, although it solves the problem, is a real pain when it comes to commands, because there you have to implement it manually (unlike business objects, which support it automatically for each managed property). Remember CArchive from MFC of 10 years ago? This is it.
Saving an object does not merge the new state in place; rather, it returns a new object. We had many problems in Silverlight with the fact that every save replaces the object tree. So, we had to override the CSLA default behavior and implement in-place save, with all the associated complexity of merging the new state with the old one.
You quickly lose control over what is actually transmitted on the wire. For example, here is something I discovered while examining the CSLA source code: serializing a business object also serializes all the serializable subscribers to its PropertyChanged and PropertyChanging events. So, when such an object is sent to the server, it carries along with it all the serializable subscribers to those events. From the mobile-object philosophy this is fine - the mobile object simply preserves its living environment across the application tiers. From a practical point of view, I find this a disaster waiting to happen. Needless to say, I disabled this feature on the spot.
Looking back after 2 years of working with CSLA, I have come to a conclusion that many others already came to before: your server-side objects are just not the same as your client-side ones. Trying to pretend they are yields a lot of grief later in development. And this is probably the most important nay to CSLA. The concept of mobile objects seems right at first, but as the project grows and the server and client sides develop, having the same object type on the server and the client becomes more of a liability than an advantage - the internet is full of discussions on the matter.
Bottom line: I would not have used CSLA if, back when I started the project, I had had the same understanding that I do now. CSLA gives you a lot of stuff out of the box, and I like the DataPortal concept very much, but I see that I could have done fine without it and be in a better place now.
These are my 2 cents.

Software Requirement Specifications for Web Applications [closed]

I'm looking for some guidance/books to read when it comes to creating a software requirement specification for a web application. For inspiration I have read some spec documents for desktop-based applications. The documents I have read capture a system's functional requirements in use cases, which tend to be rather data-oriented, centered around the various CRUD operations the application is intended to perform.
I like this structure; however, I'm finding it rather difficult to marry it to what my web application needs to do, which is mostly reading data as opposed to manipulating it. I've had a go at writing some use cases, but they all tend to boil down to "Search for item", "Change view of search results", or "User selects facet to refine search results". This doesn't sound quite right to me and makes me wonder if I'm going about this the right way.
Are there planning differences between web based and desktop based applications?
In my experience, there is really nothing wrong with having all the specifications be CRUD. Most of the time, an application isn't just "a simple CRUD app."
Even if it feels like repeating the same CRUD sentences over and over, actually writing them down and thinking about it (instead of copy & pasting) will often uncover hidden requirements.
The differences between desktop based applications and web based applications is staggering.
I recommend reading these in exactly this order and applying this knowledge in exactly the opposite order, aside from CSS 3, HTML 5, and XHTML 1.1:
RFC 3986 - URI
RFC 2616 - HTTP 1.1
RFC 4346 - TLS 1.1
RFC 4251 - SSH Protocol
RFC 4252 - SSH Authentication
RFC 4253 - SSH Transport
RFC 2045 - MIME
RFC 4627 - JSON
HTML 4.01
XML
XHTML 1.0
XHTML 1.1
ECMAScript
CSS 2
HTML 5 (Not a standard)
CSS 3 (Not a standard)
Web Content Accessibility Guidelines 2.0
Symantec Internet Security Threat Report Volume XIV
Symantec Internet Security Threat Report Volume XV
OWASP Top 10
SEO
Once you have finished reading this you should begin to understand how the basic technology of the web works. Only at this point would you be ready to develop, conformantly, for a web application. There are many other technologies at play, but these are the basics and once you are familiar with the basics you will know where else to look for more information.
Basically you can use the same method as for desktop applications, although you might make some additions, because web applications often tend to have different types of requirements. First of all, read something good about use cases; there are different use case levels, and that might be a solution for your use cases which do not seem quite right. Also do not forget about use case generalization and parameterized use cases if CRUD repetition is the problem.
One thing which is often more important in web applications than in desktop apps is the aspect of usability. This is because of the nature of the web - people often have the choice of not using your service and going to the next Google result if your app is not usable. So what I think is a good addition to the spec are Personas - just find some possible instances of the human actors for your use cases, think of some goals they might often want to achieve using your web app, and present how they will achieve them (and try to make it super easy, of course).
Another important thing is the Information Architecture - the way in which you will provide information in your web app. This comprises navigation and some basic layout, but not necessarily design - just information about where to find something in your web app. This can be done using some rapid prototyping tools.

Application / MVC Event Model

Update: This question was inspired by my larger quest for mapping ontologically the whole software systems architecture enchilada. I've written a blog post about it, and hopefully it will help clarify what I'm after.
Many, many, many frameworks and stacks that are event-driven have too much variation for my little head to get around. Are there some resources somewhere that define the outline of a reasonable Application Event Model: what events there are, and what triggers are most common?
I've got my own framework with a plugin and event-driven architecture, but I want to open-source it, and as such would like to make it closer to some common ground so as not to alienate people.
So to clarify: this is for an application, meaning setting up the environment, the dependencies, the data sources (like databases), and, being an MVC framework, setting up the model and the view, launching controllers/actions, and, in the GUI, the various stages of the interface (header, content, columns, etc.).
Ideas? Thoughts? Pointers? (And I've made it language and platform neutral at this point)
I read your blog entry, which btw I found an extremely interesting read, but... this question does not seem to reflect the broadness of the issue you are presenting there.
What you are after is very abstract and theoretical. What I mean to say is that if you tie any of those ideas to actual technology you will find yourself 'stuck' with it. This is why many of us are reluctant to use any framework. Especially the 'relabeled' products suddenly claiming to conform to the trend. We choose mainly on the basis of what appears to be needed to reach a predetermined result.
Frameworks (or tools in general) that target the application architecture domain distinguish themselves primarily by the amount of responsibility they are designed to take on. Spring, for example, only deals with the concept of decoupling and is therefore easily adopted and usable in many situations. The quality of any framework is expressed in terms of how well its designers were able to keep their product within the boundaries of that responsibility. Some front-to-end products will do exactly the opposite, code generators being among the 'worst' of them.
To answer your question at the top of this page, I do not think there is a framework that does what you want at this time and I do not think there is a single model of how applications (should) work. Keep in mind though that the application architecture domain deals with technology more than it does with concepts. In other words: If it works and meets the requirements, then you're pretty much done.
That said, you might find something of value in agent-based systems.
Heh. Most developers pick the major framework they like the tools for and stick with it. That's usually the winning strategy. I sympathize with your desire not to marry a single vendor.
Keep in mind however, that in developing your own framework, you're going to end up tied to a single vendor anyway. :-)
Are there some resources somewhere that define the outline of a reasonable Application Event Model, what events there are, and what triggers are most common?
I don't think so.
From what I see, there are two kinds of models out there: those with a real framework with which you can make a working data entry dialog, and abstract meta-meta-models that are optimized for modeling themselves.
Try surveying a few current frameworks that have good documentation online and cross-reference the major terminology in a spreadsheet. It's an interesting exercise.
I'd have a look at Spring for Java, and the XT Framework Spring module (http://springmodules.dev.java.net/docs/reference/0.9/html/xt.html), which apparently supports event-driven architecture, as starting points. Spring has an MVC framework (inc. convention-based routing to controllers), db configuration (for Hibernate, particularly), plus full dependency injection support. There's also a mechanism in Spring for modularising your web apps, called Spring Slices. And it can be integrated with Jersey for building RESTful apps.
(Unfortunately, I tried to provide links to everything, but this place only lets new users post a single link. So you'll have to do some googling :) )

Web framework programming mindset [closed]

I am just starting to play with Django/Python and am trying to shift into the MTV mode of programming that Django asks for (insists on). Deciding which functions should be methods of a model versus simply being functions in a view has so far been confusing. Does anyone know of a book, website, blog, slideshow, whatever that discusses web framework programming in more general, abstract terms? I imagine just a book on object-oriented programming would do it, but I feel like that would be overkill - I was looking for something web-framework specific.
My basic rule in Django is: if you could conceivably need the functionality from somewhere other than the view itself, it doesn't belong in the view function.
I'd also recommend downloading some of the plethora of apps on Django Pluggables and seeing how they do it.
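As a rough illustration of that rule (the model and field names here are made up, not from any particular app), logic that other code might need lives on the model, and the view just orchestrates:

# models.py
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateTimeField(null=True, blank=True)

    def is_published(self):
        # Reusable beyond any single view (templates, admin, other views).
        return self.published is not None

# views.py
from django.shortcuts import render, get_object_or_404

def article_detail(request, pk):
    article = get_object_or_404(Article, pk=pk)
    # The view only orchestrates: fetch the object, hand it to the template.
    return render(request, "article_detail.html", {"article": article})

Anything a template, the admin, or another view might also need (like is_published) stays on the model.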
Once you do find a good guide, here's something to remember: Django is a bit special with its terminology. It uses "MTV" for Model, Template, and View (and may also mention a URL dispatcher somewhere along the way), whereas the more standard set of terms is "MVC" for Model, View, and Controller.
Model is the same in both meanings - a model of a data entity, often linked to a database table, if the framework implements Object/Relational Mapping (which Django does).
But the two remaining terms might be confusing; where Django talks about Views, the 'rest of the world' talks about Controllers. The basic idea is that this is where the presentation logic is done. Calculations are calculated, arrays are sorted, data is retrieved, etc. I'd say that Django's URL dispatcher is also a part of the conventional Controller concept.
Django's Templates are comparable to Views elsewhere - here you have your presentation, nothing else. Where Django restricts you to a very small set of logic constructs, other frameworks often just recommend that you not do anything other than present HTML, with some presentational logic elements (like loops, branches, etc.), but don't stop you from doing other stuff.
So, to recap:
Model: Data objects
Controller (View in Django): Data process
View (Template in Django): Presentation
Oh, btw: For a Django-specific guide, consider reading The Django Book
I've not really used Django in anger before, but in Rails and CakePHP (and by extension, any MVC web-framework) the Fat Model, Skinny Controller approach to organising your methods has been a real eye-opener for me.
If you aren't absolutely set on diving into Django and don't mind trying something else as a start, you might want to give WSGI a shot, which allows you to template your application your own way using a third party engine, rather than having to go exactly by Django's rules. This also allows you to peek at a lower level of handling requests, so you get a bit better understanding of what Django is doing under the hood.
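For a sense of how little is underneath, a bare WSGI application is just a callable; a minimal sketch (handler and server names chosen arbitrarily, using only the standard library):

# A minimal WSGI application: a callable taking the environ dict and
# a start_response callback, returning an iterable of bytes.
def application(environ, start_response):
    status = "200 OK"
    body = b"Hello from plain WSGI\n"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response(status, headers)
    return [body]

if __name__ == "__main__":
    # Serve it with the reference server from the standard library.
    from wsgiref.simple_server import make_server
    make_server("127.0.0.1", 8000, application).serve_forever()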
Here are a few links that might be helpful as an overview.
From my own experience, when I first started using MVC based web-frameworks the biggest issue I had was with the Models. Prying SQL out of my fingers and making me use Objects just felt strange. Once I started thinking of my data as Objects instead of SELECT statements it started getting easier.
MVC In laymen's terms
MVC: The Most Vexing Conundrum
How to use Model-View-Controller
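Going back to the point above about thinking of data as objects instead of SELECT statements, here is a rough sketch using the Django ORM (the app, model, and field names are hypothetical):

# Instead of writing raw SQL such as:
#   SELECT * FROM books WHERE author = 'Tolkien' ORDER BY published DESC;
# you ask the model class for objects:
from myapp.models import Book  # hypothetical app and model

recent_tolkien = (Book.objects
                  .filter(author="Tolkien")
                  .order_by("-published"))
for book in recent_tolkien:
    print(book.title)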
A view function should only contain display helpers or display logic. View functions should never access the model itself, but should take model data as parameters. It is important to separate the model from the view. So if a function handles accessing the database or database objects, it belongs in the model; if it handles formatting the display, it belongs in the view.