I have a school assignment for which I am supposed to create thousands of posts on my Ghost CMS. I have tried to automate this with Tampermonkey scripting, but to no avail, since my skills are not that great. How would I go about succeeding at this task?
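Rather than driving the browser with Tampermonkey, it is usually easier to script Ghost's Admin API directly. Here is a minimal sketch in Python, assuming a recent Ghost version and an Admin API key created under Settings > Integrations; the site URL and key below are placeholders:

```python
import time

import jwt       # pip install pyjwt
import requests  # pip install requests

SITE = "https://your-site.example"  # placeholder: your Ghost installation
ADMIN_KEY = "<id>:<secret>"         # placeholder: Admin API key from a custom integration

def admin_token(key: str) -> str:
    """Sign the short-lived JWT the Admin API expects (HS256, 5-minute max lifetime)."""
    kid, secret = key.split(":")
    now = int(time.time())
    return jwt.encode(
        {"iat": now, "exp": now + 300, "aud": "/admin/"},  # older Ghost versions use "/v3/admin/"
        bytes.fromhex(secret),
        algorithm="HS256",
        headers={"kid": kid},
    )

for i in range(1, 1001):
    # ?source=html tells Ghost to accept the "html" field as the post body.
    resp = requests.post(
        f"{SITE}/ghost/api/admin/posts/?source=html",
        headers={"Authorization": f"Ghost {admin_token(ADMIN_KEY)}"},
        json={"posts": [{"title": f"Post {i}",
                         "html": f"<p>Generated post {i}.</p>",
                         "status": "draft"}]},
    )
    resp.raise_for_status()
```

A new token is signed on each request because Admin API tokens expire after five minutes; for thousands of posts you may also want to throttle the loop.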
Related
I’m building a super simple website with 5 pages and I want a CMS that allows me to change the text and the pictures in a couple of them.
In the past I used WordPress, but it has way too many features that I don't need in this case.
I've been trying to learn Gatsby.js, so I would like to build it on that, but in trying to see how to source from Netlify CMS I ran into an overwhelming amount of information that I'm not sure I need.
Any tips?
Thanks!
Netlify has a built-in CMS, and it's compatible with Gatsby! You can find examples online. It should be good for smaller sites, but for larger projects I really like Prismic.io. Contentful is another popular one, but it's a bit pricier than Prismic.
Edit: I reread your comment about sourcing from Netlify. Netlify CMS is not a "source" plugin in Gatsby. You use a local-file + markdown source and add the Netlify CMS configuration, which exposes an admin interface at an endpoint. You configure your data models in that interface, create a login, etc. Then, when you submit changes, it modifies files in your connected Git repo, so the local file + remark setup makes the data available to your GraphQL queries.
In the end I used Forestry.io, a good, simple solution that did exactly what I needed in combination with Jekyll.
I recently got a simple task: create a one-page application showing feeds from an API. It should be possible to add, edit, and delete a feed, add/remove a like, etc.
I spent the last few hours trying to figure out how to start. I have created several websites using HTML, CSS and some jQuery or Bootstrap. This is new to me.
API is here: http://fe-zadanie.herokuapp.com
Available: GET, PUT, DELETE
I do not want somebody to write code for me. I just need a little nudge on how to start. I found various sources to study, but I cannot find something usable to start from.
Any help would be nice, thank you.
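As a nudge rather than a solution: a good first step is to poke at the API from a small script (or a REST client) so you know what the endpoints return before wiring them into your page. A sketch with Python's requests; the /feeds route, ids, and fields below are guesses, so check the API's documentation for the real names:

```python
import requests  # pip install requests

BASE = "http://fe-zadanie.herokuapp.com"

# List the feeds; "/feeds" is a hypothetical route, inspect the API for the real one.
resp = requests.get(f"{BASE}/feeds")
resp.raise_for_status()
print(resp.json())

# PUT to add or edit an item, DELETE to remove one; the id and fields are hypothetical too.
requests.put(f"{BASE}/feeds/1", json={"title": "Edited title", "likes": 3})
requests.delete(f"{BASE}/feeds/2")
```

Once you can see the JSON shapes, the same calls translate directly into fetch() calls in the page.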
Hopefully someone can help me. I'm new to EPiServer and have been given a data migration task. We are using the latest version, 8.5. I need to migrate content from a client's home-grown CMS (which, luckily, has a tree-like structure) to EPiServer. There doesn't seem to be a whole lot of information about this on the web; perhaps I just don't know the right thing to search for.
It looks like using the EPiServer.ServiceApi might be the route to go, but again, locating useful documentation is proving difficult.
I was thinking of loading the client's CMS data into SQL Server and writing a simple console application that calls the EPiServer.ServiceApi to insert the content. If anyone has any information on this, or better still an example, I would be very grateful.
Thanks,
Dan
If you are just importing content from another CMS, I would write a scheduled job in EPiServer:
http://world.episerver.com/code/dannymurphy/Stoppable-Scheduled-Job-with-feedback/
That job then uses the standard IContentRepository to create content:
http://world.episerver.com/documentation/Items/Developers-Guide/EPiServer-CMS/8/Content/Persisting-IContent-instances/
That way you can run it whenever you want and have access to EPiServer's complete API. You can also watch the progress of the import through the job status.
In the job you can read the content from a file in any format you like, directly from the source CMS database, or perhaps from an XML or RSS feed.
I have moved content from PHP, Java and .NET CMSs this way. In .NET you could even access the source CMS via WCF or SOAP, if available.
The ServiceApi is relatively new and more focused on Commerce products and media assets than on CMS page and block content, so I wouldn't use it for this.
By the way, there is complete documentation for the ServiceApi below; did you not find it?
http://world.episerver.com/documentation/Items/EPiServer-Service-API/
Regarding language management, you can read more at the links below:
http://cjsharp.com/blog/2013/04/11/working-with-localization-and-language-branches-in-episerver-7-mvc/
http://tedgustaf.com/blog/2010/5/create-a-new-page-language-branch-programmatically-in-episerver/
Basically you have two options for multiple languages. If the content is just straight translations, you should create nine language versions (branches) of the same page. You can also have multiple sites in one EPiServer installation, but that requires nine separate licenses (and the associated costs).
I've done a lot of EPiServer content migration projects. The easiest way, if it's possible, is to export your current site's tree as JSON and then import that into EPiServer. I had to do it on a recent project, and combined with Json.NET it's pretty easy.
If you want to go that route, you can find all the code here: EPiServer Content Migration with Json.NET
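For what it's worth, the export half of that approach (turning the source CMS's flat page table back into a nested tree) is CMS-agnostic. A sketch in Python, assuming a pages table with id/parent_id/title/body columns; both the schema and the SQLite stand-in are assumptions (the poster mentioned SQL Server, where pyodbc would replace sqlite3):

```python
import json
import sqlite3  # stand-in for the SQL Server database mentioned above

conn = sqlite3.connect("source_cms.db")
rows = conn.execute("SELECT id, parent_id, title, body FROM pages").fetchall()

# Index every page by id, then hang each page off its parent to rebuild the tree.
pages = {r[0]: {"id": r[0], "title": r[2], "body": r[3], "children": []} for r in rows}
roots = []
for id_, parent_id, _, _ in rows:
    (pages[parent_id]["children"] if parent_id in pages else roots).append(pages[id_])

with open("site-tree.json", "w", encoding="utf-8") as f:
    json.dump(roots, f, indent=2)
```

The resulting site-tree.json can then be walked recursively on the EPiServer side when creating content.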
Apologies, a fairly broad request with little to offer. I would like to be able to create sections and pages in OneNote from a list, say in Excel, using PowerShell.
Any help would be much appreciated.
Not sure if this is still current enough to work properly without knowing which version of PowerShell you are running, but here are a few pages that could help you out.
http://blogs.technet.com/b/jamesone/archive/2007/10/02/powershell-and-one-note-no-really.aspx
http://bdewey.com/2007/07/18/onenote-powershell-provider/
Essentially this gives you a OneNote provider that lets you navigate and create OneNote notebooks and pages as you would with any other PowerShell provider. It could help with what you are doing.
So our company has a large number of internal wiki sites for different departments, and I'm looking for a way to unify them. We keep trying to get everybody to use the same wiki, but it never works; they keep wanting to create new ones. What I want to do as an alternative is to scrape each wiki and create a new wiki whose articles combine the information from each source.
In terms of implementation, I've looked at Nutch (http://nutch.apache.org/) and Scrapy (http://scrapy.org/) for the web crawling, with MediaWiki as the frontend. Basically I'd use the crawler to scrape each wiki, write some code in the middle (I'm thinking Python or Perl) to make sense of it and create new articles, and write to MediaWiki using its API.
Wasn't sure if anybody had similar experience and a better way to do this, trying to do some R&D before I get too deep into the project.
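For the "write to MediaWiki using its API" step, the flow is: log in, fetch a CSRF token, then call action=edit. A minimal Python sketch with requests, assuming a bot account created via Special:BotPasswords; the wiki URL, credentials, and page title are placeholders:

```python
import requests  # pip install requests

API = "https://wiki.example.com/w/api.php"        # placeholder: the combined wiki
USER, PASSWORD = "Bot@merge", "botpassword"       # placeholders: from Special:BotPasswords

s = requests.Session()

# 1. Fetch a login token, then log in.
r = s.get(API, params={"action": "query", "meta": "tokens", "type": "login", "format": "json"})
login_token = r.json()["query"]["tokens"]["logintoken"]
s.post(API, data={"action": "login", "lgname": USER, "lgpassword": PASSWORD,
                  "lgtoken": login_token, "format": "json"})

# 2. Fetch a CSRF token for editing.
r = s.get(API, params={"action": "query", "meta": "tokens", "format": "json"})
csrf_token = r.json()["query"]["tokens"]["csrftoken"]

# 3. Create or overwrite an article with the merged content.
r = s.post(API, data={"action": "edit", "title": "Merged/Some article",
                      "text": "Combined wikitext goes here.",
                      "token": csrf_token, "format": "json"})
print(r.json())
```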
I did something very similar a little while back. I wrote a little Python script that scrapes a page hierarchy in our Confluence wiki, saves the resulting HTML pages locally, and converts them into DITA XML topics for processing by our documentation team.
Python was a good choice: I used mechanize for my browsing/scraping needs and the lxml module for making sense of the XHTML (it has quite a nice range of XML traversal/selection methods). Worked out nicely!
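A stripped-down sketch of that scraping half, for anyone trying the same, using mechanize and lxml as described; the start URL and the XPath selectors are assumptions about a typical Confluence page and will vary with your theme/version:

```python
import mechanize         # pip install mechanize
from lxml import html    # pip install lxml

START = "https://confluence.example.com/display/DOCS/Home"  # placeholder page

br = mechanize.Browser()
br.set_handle_robots(False)  # Confluence's robots.txt would otherwise block mechanize

page = br.open(START).read()
doc = html.fromstring(page)

# Grab the main content node and candidate child-page links; both selectors
# are assumptions, so inspect your wiki's markup and adjust.
body = doc.xpath('//div[@id="main-content"]')
children = doc.xpath('//a[contains(@href, "/display/")]/@href')

print(len(body), "content node(s),", len(children), "candidate child links")
```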
Please don't do screen scraping; you make me cry.
If you just want to regularly merge all wikis into one and have them under a "single wiki", export each wiki to XML and import the XML of each wiki into its own namespace of the combined wiki.
If you want to integrate the wikis more tightly and on a live basis, you need cross-wiki transclusion on the combined wiki to load the HTML from a remote wiki and show it as a local page. You can build on existing solutions:
the DoubleWiki extension,
Wikisource's interwiki transclusion gadget.