I am attempting to create a feed with Zend_Feed_Writer, but I need to include images.
I checked, and such support is offered by Media RSS (Yahoo) via its namespace: http://search.yahoo.com/mrss.
Unfortunately this is not supported by Zend Framework, and I am wondering what the best approach is to create such a feed through ZF.
I believe it can be addressed via extensions, but the documentation is poor. Has anyone had this need as well?
You can create your own Feed_Writer class that extends Zend_Feed_Writer and add methods to support the elements from the Media RSS specification.
From the tags on your question I'm guessing you're using ZF2, right? I couldn't find an example for that version, but here's a good example of creating custom Feed and Entry classes in ZF1. It shouldn't be too hard to grasp the concept and translate it to ZF2.
Hope this helps, good luck.
Hopefully someone can help me; I'm new to EPiServer and have been given a data migration task. We are using the latest version, 8.5. I need to migrate content from a client's home-grown CMS (which, luckily, is in a tree-like structure) to EPiServer. There doesn't seem to be a whole lot of information about this on the web; perhaps I just don't know the right thing to search for.
It looks like using the EPiServer.ServiceApi might be the route to go, but again, locating useful documentation is proving difficult.
I was thinking of setting up the client CMS in SQL Server and writing a simple console application that calls the EPiServer.ServiceApi to insert the content. If anyone has any information on this, or better still an example, I would be very grateful.
Thanks,
Dan
If you are just importing content from another CMS, I would write a scheduled job in EPiServer:
http://world.episerver.com/code/dannymurphy/Stoppable-Scheduled-Job-with-feedback/
That job then uses the standard IContentRepository to create content:
http://world.episerver.com/documentation/Items/Developers-Guide/EPiServer-CMS/8/Content/Persisting-IContent-instances/
That way you can run it whenever you want and have access to EPiServer's complete API. You can also see the progress of the import through the job status.
In the job you can read the content from a file in any format you like, directly from the source CMS database, or perhaps from an XML or RSS feed.
I have moved content from PHP, Java, and .NET CMSes this way. In .NET you could even access the source CMS via WCF or SOAP, if available.
The ServiceApi is relatively new and more focused on Commerce products and media assets than on CMS page and block content, so I wouldn't use it here.
By the way, there is complete documentation for the ServiceApi below; did you not find it?
http://world.episerver.com/documentation/Items/EPiServer-Service-API/
Regarding language management, you can read more at the links below:
http://cjsharp.com/blog/2013/04/11/working-with-localization-and-language-branches-in-episerver-7-mvc/
http://tedgustaf.com/blog/2010/5/create-a-new-page-language-branch-programmatically-in-episerver/
Basically, you have two options for multiple languages. If the content consists of straight translations, you should create nine language versions (branches) of the same page. You can also have multiple sites in one EPiServer installation, but that requires nine separate licenses (and the associated costs).
I've done a lot of EPiServer content migration projects. The easiest way, if possible, is to export your current site's tree as JSON and then import that into EPiServer. I had to do this on a recent project, and combined with Json.NET it's pretty easy.
If you want to go that route, you can find all the code to do it here: EpiServer Content Migration With Json.Net
Our company has a large number of internal wiki sites for different departments, and I'm looking for a way to unify them. We keep trying to get everybody to use the same wiki, but it never works; they keep wanting to create new ones. As an alternative, I want to scrape each wiki and create a new wiki whose articles combine information from each source.
In terms of implementation, I've looked at Nutch (http://nutch.apache.org/) and Scrapy (http://scrapy.org/) for the web crawling, with MediaWiki as the frontend. Basically, I'd use the crawler to scrape each wiki, write some code in the middle (I'm thinking Python or Perl) to make sense of it and create new articles, and write to MediaWiki using its API, roughly as sketched below.
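For that last step, here is what I have in mind for pushing merged articles into MediaWiki. This is an untested sketch assuming a reasonably recent MediaWiki; the API URL, bot credentials, and page title are all placeholders:

    import requests

    API = "https://newwiki.example.com/api.php"  # placeholder combined-wiki endpoint
    session = requests.Session()

    # Log in with a bot account (the token dance used by recent MediaWiki versions)
    login_token = session.get(API, params={
        "action": "query", "meta": "tokens", "type": "login", "format": "json",
    }).json()["query"]["tokens"]["logintoken"]
    session.post(API, data={
        "action": "login", "lgname": "MergeBot", "lgpassword": "secret",
        "lgtoken": login_token, "format": "json",
    })

    # Fetch a CSRF token, then create (or overwrite) an article
    csrf_token = session.get(API, params={
        "action": "query", "meta": "tokens", "format": "json",
    }).json()["query"]["tokens"]["csrftoken"]
    session.post(API, data={
        "action": "edit",
        "title": "Merged/Example_Article",
        "text": "Combined article text assembled from the source wikis.",
        "summary": "Imported by merge script",
        "token": csrf_token,
        "format": "json",
    })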
Wasn't sure if anybody had similar experience or a better way to do this; I'm trying to do some R&D before I get too deep into the project.
I did something very similar a little while back. I wrote a little Python script that scrapes a page hierarchy in our Confluence wiki, saves the resulting HTML pages locally, and converts them into DITA XML topics for processing by our documentation team.
Python was a good choice: I used mechanize for my browsing/scraping needs and the lxml module for making sense of the XHTML (it has quite a nice range of XML traversal/selection methods). Worked out nicely!
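The core of it looked roughly like this (a trimmed-down sketch; the URL and the main-content div id are guesses based on a typical Confluence theme, so adjust the XPath for your wiki):

    import mechanize
    from lxml import html

    # Fetch one wiki page; mechanize handles cookies and redirects for us
    browser = mechanize.Browser()
    browser.set_handle_robots(False)  # internal wiki, robots.txt blocks crawlers
    response = browser.open("https://confluence.example.com/display/DOCS/Some+Page")

    # Parse the XHTML and pull out the bits we care about via XPath
    tree = html.fromstring(response.read())
    title = tree.xpath("//title/text()")[0].strip()
    body = tree.xpath('//div[@id="main-content"]')[0]

    # Save the page body locally for the later HTML-to-DITA conversion step
    with open("some-page.html", "wb") as f:
        f.write(html.tostring(body, pretty_print=True))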
Please don't do screen scraping; you make me cry.
If you just want to regularly merge all the wikis into one and have them under a single wiki, export each wiki to XML and import each XML dump into its own namespace of the combined wiki, along the lines of the sketch below.
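For the export step, a minimal sketch like the following would fetch the current revisions of selected pages from one source wiki; the host and page names are placeholders:

    import requests

    # Pull an XML dump of selected pages from one departmental wiki via
    # Special:Export (host and page names are placeholders); curonly=1
    # exports only the current revision of each page
    resp = requests.post(
        "https://deptwiki.example.com/index.php",
        params={"title": "Special:Export"},
        data={"pages": "Page_One\nPage_Two", "curonly": 1},
    )
    resp.raise_for_status()
    with open("deptwiki-export.xml", "wb") as f:
        f.write(resp.content)

    # Load each dump into the combined wiki: Special:Import lets you pick a
    # destination namespace for the uploaded XML, and the importDump.php
    # maintenance script is the alternative for large dumps.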
If you want to integrate the wikis more tightly and on a live basis, you need crosswiki transclusion on the combined wiki to load the HTML from a remote wiki and show it as a local page. You can build on existing solutions:
the DoubleWiki extension,
Wikisource's interwiki transclusion gadget.
I saw a tweet today referring to the MVCHTML5 helpers on CodePlex. I'm wondering:
Has anybody tried this out yet?
Does it add any significant benefit over the default HTML helpers?
What are the actual HTML5 aspects of this library?
I would definitely recommend checking it out (I am a little biased, as I wrote it!).
It's just a simple DLL that you include in your MVC project, and it gives you all the benefits of HTML5 input types. If the browser doesn't support a given type, it simply falls back to a normal textbox.
To answer your questions, though: it only adds a benefit if you are looking to add HTML5 functionality to your application or website. It uses the exact same syntax as the normal HTML helpers that ASP.NET MVC comes with; it just makes adding HTML5 input types to your site easier.
Here is another link regarding HTML5 and the input types: http://diveintohtml5.ep.io/
I've just been trying it out; it doesn't seem to support the Required DataAnnotations for unobtrusive client-side validation.
I am currently trying to use jqGrid and Flexigrid for the first time to make database-driven pages whose backend uses Zend Framework.
I have been googling, and the search results that turn up aren't very helpful.
Are there any links that would be helpful?
Sorry, I'm not familiar with the Zend Framework, but as for a useful link for jqGrid documentation, I find this wiki to be the best.
Edit: here is another valid documentation resource.
The Demo page on the official jqGrid site provides a lot of good examples and the code behind them.
Well worth checking out as it covers most things you can do with it.
I use CodeSmith to create our code generation templates and have had success learning the tool by looking at example templates and the built-in documentation. However, I was wondering: are there any other resources (books, articles, tutorials, etc.) for getting a better grasp of CodeSmith?
Have you checked the CodeSmith community site?
We also have a great new collection of video tutorials available. You may want to check those out as well.
There is also a CodeSmith section on Google Code where you can download the latest updates of some of the CSLA, NHibernate, and Plinqo templates.
Here is an interesting tutorial for building a data access layer using CodeSmith.
Depending on the templates you are using, we might have a separate website with tons of useful information, like nettiers.com and plinqo.com. Also check out the help section on our community site.
We have also recently created a new wiki (http://docs.codesmithtools.com) for all of our documentation.
Thanks
-Blake Niemyjski