Can I add non-path pages in OpenAPI 3.0.0? - openapi

I want to migrate documentation to OpenAPI 3.0.0 to be uploaded to readme.com.
The documentation contains some pages explaining general concepts that are not linked to any paths.
These pages should show up as separate pages in readme.com.
Is it possible to add pages like that? If so, how?

Related

Best way to publish an OpenAPI document on GitHub (readme.md)?

I have a project hosted on GitHub. To document the API I am using the OpenAPI spec. Now I want to add a link in readme.md (on GitHub) that refers visitors to the OpenAPI document, for a good user experience.
As far as I can see, I have two solutions:
http://editor.swagger.io/?raw=https://raw.githubusercontent.com/path/to/file.yaml
https://app.swaggerhub.com/apis/(username)/(api-name)/(api-version)
Both approaches work, but they both open with an editor on the left side that shows the content of the YAML file and, IMHO, wastes significant screen space. Not really what I want. Is there an option to display the OpenAPI document without the editor open? Just similar to what is done for https://petstore.swagger.io/, which comes without the editor.
Or ... is there maybe an option available to display the OpenAPI document on GitHub directly?
Thanks, Christoph
If you use SwaggerHub, replace /apis/ with /apis-docs/ in the URL to view just the API docs without the editor part. For example:
https://app.swaggerhub.com/apis-docs/swagger-hub/registry-api/1.0.47
Or if your OpenAPI definition is hosted elsewhere (e.g. on GitHub), you can use
https://petstore.swagger.io/?url=https://path/to/file.yaml
to load it into the public Swagger UI demo. Swagger UI renders API docs without the editor part.
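If you want the same result on your own hosting (e.g. GitHub Pages for your repo), you can also serve a minimal HTML page that loads Swagger UI from a CDN and points it at your spec. A rough sketch (the spec URL is the placeholder from your question):

<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="https://unpkg.com/swagger-ui-dist/swagger-ui.css">
</head>
<body>
<div id="swagger-ui"></div>
<script src="https://unpkg.com/swagger-ui-dist/swagger-ui-bundle.js"></script>
<script>
// Render just the API docs, with no editor pane
SwaggerUIBundle({
  url: "https://raw.githubusercontent.com/path/to/file.yaml",
  dom_id: "#swagger-ui"
});
</script>
</body>
</html>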

Dokuwiki - IndexMenu Plugin: How to create pages meaningfully?

I've started using IndexMenu to implement an automatically updating index for my wiki. I have now implemented indexes on all front pages of my (sub)namespaces, but have run into the problem of how to meaningfully add pages at this point.
To me, the whole idea of creating pages in a wiki was to create an empty link on an index page, follow the link, and fill it with content. Now that the indexing is done by the plugin, I wonder how I can still create pages within the wiki backend without resorting to creating the pages in the actual folder structure.
Am I just completely missing something here?
I think I'm having trouble understanding the question - you can still create pages by creating/following a link - but you might find this plugin useful:
https://www.dokuwiki.org/plugin:addnewpage
If you install it and add {{NEWPAGE}} to your sidebar.txt, users will be able to create a page anywhere they have permission.
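For example, a sidebar.txt along these lines (the namespace and depth are just examples) gives you the auto-generated index plus a "create page" form next to it:

{{indexmenu>:wiki#2}}
{{NEWPAGE>wiki}}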

Ghost: Display Posts From Multiple Tags On One Page

I'm developing a website for a magazine using Ghost (http://ghost.org/) and would like to have pages that display posts from two related tags, e.g. "Science and Environment". I understand that when using a static page you do not have access to posts, so I cannot, for example, do this (which would be the ideal solution):
{{#foreach posts}}
{{#has tag="science, environment"}}
do thing
{{/has}}
{{/foreach}}
I have had a look at the Trello roadmap (https://trello.com/b/EceUgtCL/ghost-roadmap) but couldn't spot anything there. I would appreciate any help with a workaround.
Cheers
This is possible, but a bit tricky:
1. Install self-hosted Ghost. There are plenty of step-by-step guides for doing this on Amazon, DigitalOcean, Heroku, etc.
2. Create your own custom Handlebars helper. Create myhelpers.js in the Ghost project root and put your helper code there. For example: a {{bytag}} helper that selects posts by one tag, which you can extend to query posts by more than one tag (see the sketch below).
3. At the beginning of config.js, place require('./myhelpers')(); to activate your custom helper.
4. Restart Ghost.
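For illustration, a rough sketch of myhelpers.js (assuming a self-hosted Ghost 0.x install; the internal API require path and the filter syntax are assumptions that vary by Ghost version):

var hbs = require('express-hbs');
var api = require('./core/server/api'); // Ghost's internal API (path is an assumption)

module.exports = function () {
    // Usage in a template: {{bytag "science,environment"}}
    // Renders a list of posts matching any of the comma-separated tags.
    hbs.registerAsyncHelper('bytag', function (tags, cb) {
        api.posts.browse({ filter: 'tags:[' + tags + ']', limit: 'all' })
            .then(function (result) {
                var items = result.posts.map(function (post) {
                    return '<li><a href="' + post.url + '">' + post.title + '</a></li>';
                }).join('');
                cb(new hbs.handlebars.SafeString('<ul>' + items + '</ul>'));
            });
    });
};

You could then call {{bytag "science,environment"}} from a static page template.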

How to scrape and virtually combine wiki articles?

So our company has a large number of internal wiki sites for different departments and I'm looking for a way to unify them. We keep trying to get everybody to use the same wiki, but it never works; they keep wanting to create new ones. What I want to do as an alternative is to scrape each wiki and create a new wiki with articles that combine the information from each source.
In terms of implementation I've looked at Nutch (http://nutch.apache.org/) and Scrapy (http://scrapy.org/) to do the web crawling, using MediaWiki as the frontend. Basically I'd use the crawler to scrape each wiki, write some code in the middle (I'm thinking Python or Perl) to make sense of it and create new articles, and write them to MediaWiki using its API.
Wasn't sure if anybody had similar experience and a better way to do this, trying to do some R&D before I get too deep into the project.
I did something very similar a little while back. I wrote a little Python script that scrapes a page hierarchy in our Confluence wiki, saves the resulting HTML pages locally, and converts them into DITA XML topics for processing by our documentation team.
Python was a good choice - I used mechanize for my browsing/scraping needs and the lxml module for making sense of the XHTML (it has quite a nice range of XML traversing/selection methods). Worked out nicely!
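The core of it looked roughly like this (a sketch; the URL and XPath are placeholders you would adapt to your wiki's markup):

import mechanize
from lxml import html

br = mechanize.Browser()
br.set_handle_robots(False)  # the wiki's robots.txt would otherwise stop mechanize
br.open('https://wiki.example.com/display/SPACE/Some+Page')  # placeholder URL
page = html.fromstring(br.response().read())

# Grab the main content area and save it locally (XPath is a placeholder)
content = page.xpath('//div[@id="main-content"]')[0]
with open('some-page.html', 'wb') as f:
    f.write(html.tostring(content))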
Please don't do screen scraping - you make me cry.
If you just want to regularly merge all wikis into one and have them under a "single wiki", export each wiki to XML and import the XML of each wiki into its own namespace of the combined wiki.
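With MediaWiki's bundled maintenance scripts, that is roughly the following (you would still need to map pages into per-wiki namespaces, e.g. by rewriting the title elements in the dump, or by using the target-namespace option of Special:Import):

# on each departmental wiki:
php maintenance/dumpBackup.php --current > deptwiki.xml
# on the combined wiki:
php maintenance/importDump.php deptwiki.xml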
If you want to integrate the wikis more tightly and on a live basis, you need crosswiki transclusion on the combined wiki to load the HTML from a remote wiki and show it as a local page. You can build on existing solutions:
the DoubleWiki extension,
Wikisource's interwiki transclusion gadget.

Converting content pages to tt_news (Typo3)

On a website, I have a section where I put up a new page every week. I'd like to convert this to a system using tt_news. How do you suggest I import the pages (more than 100) into tt_news? Can I do it with a simple SQL query, or should I write a custom PHP script to perform the import? Is there an existing extension that could help me with this task?
It doesn't really matter to me whether I simply build news items linked to the existing pages, or transfer the content of each page into the content of a news record. It would be great if I could convert the page title to the news publish date, but I could use the page publishing date as well.
What do you suggest for performing this task?
I would do it via SQL, as you mentioned (see the sketch below). It could get tricky if you have multiple content elements per page that need to be merged into one tt_news record.
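A rough sketch of that query (assuming the default pages / tt_content / tt_news tables; the uids are placeholders, and with several content elements per page you would get several rows here instead of one merged record):

INSERT INTO tt_news (pid, title, datetime, bodytext, crdate, tstamp)
SELECT 1234,       -- uid of your news storage folder (placeholder)
       p.title,
       p.crdate,   -- or a date parsed from the page title
       c.bodytext,
       UNIX_TIMESTAMP(),
       UNIX_TIMESTAMP()
FROM pages p
JOIN tt_content c ON c.pid = p.uid AND c.deleted = 0
WHERE p.pid = 5678 -- uid of the section holding the weekly pages (placeholder)
  AND p.deleted = 0;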
Alternatively, you could install an extension that links tt_content elements with tt_news records. That way you only have to create the tt_news records by traversing the pages and then link each page's content to the new tt_news record. Here are some extensions that link tt_news with content elements:
ttnews_irre
aba_ttnews_content_con
Here is also an extension that could be worth a look: content2news.
Hope this is useful to you.
Best regards,
Peter