Is there any existing way to generate a what-links-here report for a gollum wiki? In other words, a list of the pages within the same wiki that link to the current page: a list of the local inbound links.
I wasn't able to spot any feature like this, nor find anything suitable in the API, but I may have missed it. Is there a third party add-on for it?
I do understand the reason it probably doesn't exist in the core: as these are plain text files, there isn't any table of links maintained anywhere. For the same reason, when a page is renamed it breaks all the inbound links to that page from other pages.
A function for this could use the API to read the generated source of each page (so that only HTML with normalized names needs to be parsed), producing a list of the local links on each page together with the page they appear on. The results could be cached per page until that page's next commit.
This could be used to enhance the existing page rename feature as well. Has anybody already done this?
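For what it's worth, the core of such a report is just a reverse index of links. Below is a minimal sketch of that idea in Python; it deliberately skips the API/HTML route described above and instead scans the raw page files of a local checkout for gollum's [[...]] link syntax, so the .md extension, the name normalization, and treating the segment before "|" as the target are all simplifying assumptions rather than gollum's exact behaviour.

    # Minimal what-links-here sketch for a gollum wiki checkout.
    # Assumptions (simplified, not gollum's exact behaviour): pages are *.md
    # files, internal links use [[...]] syntax, and the segment before any "|"
    # is treated as the link target (the target/label order differs between
    # gollum versions). Page names are normalized by swapping "-" for spaces.
    import re
    from collections import defaultdict
    from pathlib import Path

    WIKI_LINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

    def normalize(name: str) -> str:
        return name.strip().replace("-", " ").lower()

    def build_inbound_index(wiki_root: str) -> dict:
        """Return {normalized target page: set of pages linking to it}."""
        inbound = defaultdict(set)
        for path in Path(wiki_root).rglob("*.md"):
            source = normalize(path.stem)
            text = path.read_text(encoding="utf-8", errors="ignore")
            for match in WIKI_LINK.finditer(text):
                inbound[normalize(match.group(1))].add(source)
        return inbound

    if __name__ == "__main__":
        for target, sources in sorted(build_inbound_index(".").items()):
            print(f"{target}: linked from {', '.join(sorted(sources))}")

Caching the per-page results until the page's next commit, as suggested above, would essentially mean keying each file's result on the commit that last touched it.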
I've been using import functions a lot in my sports modeling, but I've never been able to figure out how to pull information that is loaded dynamically from another source.
For example, I'm trying to use IMPORTXML to pull the money line values from this link: https://www.sportsbookreview.com/betting-odds/nfl-football/money-line/
I can get the information in the left columns up until "PINNACLE". After some research I now understand that I can't get the rest of the information because it's not static on the page and I need to go to the source. How do I find the source of this information so I can pull it from there?
I tried inspecting the page, clicking on "network", clicking on "XHR", refreshing the page and previewing the results, but nothing seemed to match.
Am I looking in the wrong place?
The page uses websockets to download the data, so I don't think you could simulate that in Google Sheets using formulas alone (it might be possible in a script). However, in this particular case there is a 'classic view' variant of the page which includes all the data in its source:
https://classic.sportsbookreview.com/betting-odds/nfl-football/money-line/
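To double-check that the classic page really ships the numbers in its static HTML, and to work out an XPath you could later reuse in an IMPORTXML formula, a quick script outside of Sheets can help. A minimal sketch, assuming the odds sit in ordinary table rows; the selector is only a guess and needs adjusting after inspecting the actual markup:

    # Fetch the 'classic view' page and dump whatever table rows it contains.
    # The //table//tr XPath is only a placeholder; inspect the real source,
    # adjust it, and the same (verified) XPath can then be used with IMPORTXML.
    import requests
    from lxml import html

    URL = "https://classic.sportsbookreview.com/betting-odds/nfl-football/money-line/"

    response = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    response.raise_for_status()

    tree = html.fromstring(response.text)
    for row in tree.xpath("//table//tr"):
        cells = [cell.text_content().strip() for cell in row.xpath("./td")]
        if cells:
            print(cells)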
I want to build a website, maybe similar to a movie database, where every page has, say, actors, director, and year (Lektor seems to handle such structured metadata very well). Now I am thinking about how to realize internal links between pages on that site.
Say I have a text such as
just like in [his previous movie](link), he shows again ...
then I guess I could use the absolute path of the linked page as link target, but that makes me very inflexible with respect to changing URL structure. Can I somehow just use the ID of the target content?
Or, better yet, can I somehow automatically obtain the title of the linked page?
just like in his previous movie <<link:title>>, he shows again ...
Can I use the standard Markdown blocks for that or would I have to add some handcrafted database lookup logic?
If some content may change in the future, I think you can use the databag feature to implement this. You then just modify the databag whenever a change is needed.
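To make that concrete: whether it lives in a databag or is generated, what you need is a small lookup table from a stable page ID to the page's current URL and title. Here is a rough offline sketch of that "handcrafted database lookup" in Python; it assumes Lektor's default layout of content/<path>/contents.lr files with fields separated by --- lines, and it ignores slug overrides, so real URLs may differ.

    # Build {page id: (url path, title)} by scanning Lektor's content tree.
    # Naive sketch: assumes contents.lr fields are separated by "---" lines and
    # that the URL path mirrors the directory path (slugs can change this).
    from pathlib import Path

    def read_title(contents_lr: Path) -> str:
        for block in contents_lr.read_text(encoding="utf-8").split("\n---\n"):
            if block.startswith("title:"):
                return block.split(":", 1)[1].strip()
        return contents_lr.parent.name  # fall back to the directory name

    def build_lookup(content_root: str = "content") -> dict:
        lookup = {}
        root = Path(content_root)
        for contents_lr in root.rglob("contents.lr"):
            page_dir = contents_lr.parent
            rel = page_dir.relative_to(root).as_posix()
            url_path = "/" if rel == "." else f"/{rel}/"
            lookup[page_dir.name] = (url_path, read_title(contents_lr))
        return lookup

    if __name__ == "__main__":
        for page_id, (url, title) in sorted(build_lookup().items()):
            print(f"{page_id} -> [{title}]({url})")

Inside Lektor templates, if I remember the API correctly, the pad gives you the same information directly (e.g. site.get('/movies/his-previous-movie') and its title field), which is probably more idiomatic than doing it in Markdown alone.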
Since the source page is a thematic portal, I am looking for a suitable extension for building an extensive list of links, one that also runs under TYPO3 7.6 LTS.
It would be nice if the link list supported categories and allowed assigning a link to multiple categories. Furthermore, each link should be described not only by its destination address and an alias; it should also be possible to add a short description of the target page (possibly with a photo).
Additional functions such as letting users propose links, reporting broken links, or even user voting would be nice extras.
There used to be the Modern Linklist, but it is no longer being developed (it only supports TYPO3 < 6.x).
Is there perhaps an alternative somewhere, or how might one realize this with existing solutions? Ideally without any programming knowledge, since I'm not a programmer.
P.S.: This is not about building a spam list, but about high-quality links on topics related to the original page.
As this seems to be a straightforward use case, you could try to build that extension yourself with the ExtensionBuilder.
Just build up the records necessary for your data and let the EB generate all the useful actions: list & show; even create, edit and delete in the FE would be possible.
Afterwards you just need to edit the generated Fluid templates.
These links may help:
Overview
EB manual
Small remark: if you want the newest code state, use the EB from Git instead of from the TER.
I'm not aware of an existing extension for it, but it could be a good project to learn Extbase/Fluid.
You should also take a look at
typo3/sysext/fluid_styled_content/Resources/Private/Partials/Menu
and
typo3/sysext/fluid_styled_content/Classes/ViewHelpers/Menu
fluid_styled_content contains everything you need to create a list like that; you "just" have to combine the necessary bits and pieces.
You can do a lot with TYPO3 core functionality: there is a page type "external URL", pages can have categories by default, and there are plenty of menu options (TypoScript HMENU, menu content elements, Fluid menu ViewHelpers). The Linkvalidator can periodically check all links and report broken ones.
For suggestions you could add a form. Powermail for example can also store submitted info in database records, so your visitors could prepare page records (they are hidden until you make them visible).
I've got a staging and a live site I'm working on (not my code base). I've accidentally replaced the live server with some staging code (no backup, slap me) and I'm getting weird URLs for articles on the site's 'blog' page.
Basically everything is being called into the page correctly, but the page header link is getting mangled.
Rather than being
http://www.example.com/a-nice-url
it's giving me
http://www.example.com/news,recent,pr,etc
which appears to be the list of categories of the article.
Where/How can I easily fix this?
I'm only calling [[*content]] and can't find where that is.
Linking to an article I know exists, using the correct URL, still works.
Any ideas would be greatly appreciated.
I assume your blog page has some sort of listing somewhere, maybe a getResources call? If you can't find it in your blog list template (as you say you only see a [[*content]]), the list is probably "hardcoded" in the blog list resource's content field.
You'll want to find the chunks used to output each blog entry in the list and check which page parameter is used to construct the link. It should probably be [[*alias]], and if it is and your aliases are correct, you have some deeper trouble going on.
On a website, I have a section where I put up a new page every week. I'd like to convert this to a system using tt_news. How do you suggest I import the pages (more than 100 of them) into tt_news? Can I do it with a simple SQL query, or should I write a custom PHP script to perform the import? Is there an existing extension that could help me with this task?
It doesn't really matter to me whether I simply build news items linked to the existing pages, or whether I transfer the content of each page into the content of a news item. It would be great if I could convert the page title into the news publish date, but I could use the page publishing date as well.
What do you suggest for performing this task?
I would do it via SQL, as you already mentioned. It could get tricky if you have multiple content elements per page that need to be merged into one tt_news record.
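For illustration, here is a rough sketch of that merge as a small Python script rather than a hand-written SQL dump. The table and column names are the TYPO3 / tt_news defaults, but PARENT_PID, NEWS_FOLDER_PID and the connection settings are placeholders, and it simply concatenates bodytext fields; run it against a copy of the database first.

    # Sketch of the SQL approach: merge each page's content elements into one
    # tt_news record. Table/column names are TYPO3 + tt_news defaults, but the
    # UIDs and credentials below are placeholders; run this on a copy first.
    import time
    import pymysql

    PARENT_PID = 123        # hypothetical: parent page of the ~100 weekly pages
    NEWS_FOLDER_PID = 456   # hypothetical: sysfolder that should hold the news

    db = pymysql.connect(host="localhost", user="typo3", password="secret",
                         database="typo3", charset="utf8")
    now = int(time.time())

    with db.cursor() as cur:
        cur.execute("SELECT uid, title, crdate FROM pages "
                    "WHERE pid = %s AND deleted = 0 AND hidden = 0",
                    (PARENT_PID,))
        for page_uid, page_title, page_crdate in cur.fetchall():
            cur.execute("SELECT header, bodytext FROM tt_content "
                        "WHERE pid = %s AND deleted = 0 AND hidden = 0 "
                        "ORDER BY sorting", (page_uid,))
            parts = []
            for header, bodytext in cur.fetchall():
                if header:
                    parts.append(f"<h3>{header}</h3>")
                if bodytext:
                    parts.append(bodytext)
            # Use the page creation date as the news date; the page title could
            # be parsed into a date here instead, as mentioned in the question.
            cur.execute("INSERT INTO tt_news "
                        "(pid, tstamp, crdate, title, bodytext, datetime) "
                        "VALUES (%s, %s, %s, %s, %s, %s)",
                        (NEWS_FOLDER_PID, now, now, page_title,
                         "\n".join(parts), page_crdate))

    db.commit()
    db.close()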
You could also install an extension that links tt_content elements with tt_news records. That way you only have to create the tt_news records by traversing the pages and link each page's content to the new tt_news record. Here are some extensions that link tt_news with content elements:
ttnews_irre
aba_ttnews_content_con
Here is also an extension that could be worth a look: content2news.
Hope this is useful to you.
Best regards,
Peter