In my TYPO3 site, we create news in German and translate it to English.
While searching for canonical URLs for the news component, I found this helpful answer:
https://stackoverflow.com/a/53836388/20815643
This solution works fine, but it is not completely free of problems.
I use that code (the relevant enhancer configuration is sketched below), and the two lines routeFieldPattern and routeFieldResult create links like this:
abc.de/news/hallo-welt-2022-1763
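For reference, the relevant part of my enhancer configuration looks roughly like this (a sketch based on the linked answer; the extension, plugin, and table names follow the usual tx_news defaults, and limitToPages is omitted):

    routeEnhancers:
      NewsDetail:
        type: Extbase
        extension: News
        plugin: Pi1
        routes:
          - routePath: '/{news_title}'
            _controller: 'News::detail'
            _arguments:
              news_title: news
        defaultController: 'News::detail'
        aspects:
          news_title:
            type: PersistedPatternMapper
            tableName: tx_news_domain_model_news
            routeFieldPattern: '^(?P<path_segment>.+)-(?P<uid>\d+)$'
            routeFieldResult: '{path_segment}-{uid}'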
Without translation this works and the canonical URL is correct. But if I translate my news, the correct uid would be 1764, yet the router creates a link with the uid of the original news record (1763):
abc.de/en/news/hello-world-2022-1763
The correct link should be this:
abc.de/en/news/hello-world-2022-1764
I am looking for a way to also handle the language id in the router configuration, but I am stuck.
How can I use the language id to get a fully correct link?
I searched for additional parameters for the routeEnhancer, but found nothing.
Is it possible to have a direct path to a specific news entry?
Example:
My link is: http://www.domain.de/start/topnewsdetail/news/really-long-name-of-news-entry.html
and it would be nice to have
http://www.domain.de/newsEntry.html.
Can someone give a hint?
It is a little bit complicated if you want a general, automatic solution.
You can do it by hand if you insert pages of type 'external URL' and enter the long path as the external URL.
With realurl you have problems, as realurl will use at least one path segment for the page with the detail view before the last segment, which identifies the news record. AFAIK coolurl can omit the path segment for the page.
On the other hand: make sure the news identifier (title, subtitle?) is unique and does not collide with the paths of normal pages.
Lastly, you can use .htaccess rewrites, but then you have to differentiate between short URLs for news and short URLs for top-level pages. Those URLs will show the page, but they are not generated anywhere inside TYPO3 and so are never used (except manually), as in the sketch below.
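For the .htaccess variant, a hand-maintained rule could look roughly like this (a sketch using the URLs from the question; the short name and the long target path are of course up to you):

    # map a hand-picked short URL to the real news detail URL
    RewriteEngine On
    RewriteRule ^newsEntry\.html$ /start/topnewsdetail/news/really-long-name-of-news-entry.html [L]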
This extension adds a custom link to records like system categories or news:
https://typo3.org/extensions/repository/view/recordlink
It is released for TYPO3 6.2, but perhaps it is helpful as a starting point for creating your own extension?
I've come across this page https://www.tumblr.com/examples/share/sharing-links-to-articles.html which shows a possible way to create a custom share URL for Tumblr.
A simplified version of what they have is a "Click to share" link:
http://jsfiddle.net/m5ow6bhs/2/
This will take you to the login page, or straight to the share page if you're already logged in. However, if you change the http%3A%2F%2F part to a plain http://, it now loads a "Not Found" page: http://jsfiddle.net/m5ow6bhs/3/. What the hell, Tumblr?
Do you guys have any idea what's going on or what's the correct code to share something to Tumblr?
Cheers.
As with most share services, the URL should be passed as an encoded string. This matches the OP's observation about http%3A%2F%2F (encoded) versus http:// (raw).
Tumblr provides variable transformations in its theme operators to handle encoding, but sadly they don't work with custom variables.
One quick solution is to drop the http:// part. Example: http://jsfiddle.net/L9jd8dhz/
I have recently discovered that the share URL needs to be updated as follows:
https://www.tumblr.com/widgets/share/tool?shareSource=legacy&canonicalUrl=<-urlencode(share_url)->&posttype=link
The &posttype= parameter seems to be a new requirement to make the share work correctly.
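Put together, a working share link looks roughly like this (a sketch; https://example.com/my-article stands in for the page you actually want to share, and note that the canonicalUrl value is percent-encoded):

    <!-- share link with the target URL percent-encoded in canonicalUrl -->
    <a href="https://www.tumblr.com/widgets/share/tool?shareSource=legacy&canonicalUrl=https%3A%2F%2Fexample.com%2Fmy-article&posttype=link"
       target="_blank">Click to share</a>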
I am developing a site which is supposed to pull in the news content of other sites, something like this, but without redirecting to the source site to read the news content.
The problem is that I don't know the best way to get the content in full. I know that I can use the RSS feed of each site, but it contains only a short description of each news item, not the whole story. I have also read related questions on SO, like these:
How to get the full content from the rss feed in javascript
How to extract the full content from a partial content rss
but none of them solved my problem.
So I want to ask: what is the best way to get the whole news content from different sites, if it is necessary to go to them directly?
Sorry for my bad English; if my question is not clear enough I can explain further.
Thanks in advance.
You could use a web scraping library like boilerpipe to extract content from news sites, but scraping breaks easily (if the target site changes its layout, for example), and there may be legal issues with extracting full content from other sites and displaying it on yours.
Edit: I tried the boilerpipe API demo, and the library seems very good at extracting articles from web pages.
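If you go the boilerpipe route, its ArticleExtractor gives you the main article text in a couple of lines. A minimal sketch (the URL is a placeholder, and the boilerpipe jar plus its dependencies need to be on the classpath):

    import java.net.URL;
    import de.l3s.boilerpipe.extractors.ArticleExtractor;

    public class ExtractArticle {
        public static void main(String[] args) throws Exception {
            // placeholder URL; point this at the article you want to extract
            URL url = new URL("https://example.com/some-news-article");

            // ArticleExtractor strips navigation and boilerplate, keeping the main text
            String text = ArticleExtractor.INSTANCE.getText(url);
            System.out.println(text);
        }
    }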
I'm working on a WordPress-based site using the WPML plugin, and I've created a custom post type that has posts in two languages, English and Swedish. I need to be able to turn off the redirect to the CPT's default name. For example, say I have a CPT called "references". In order to keep nice URLs, the links on the Swedish page say "referenser" instead. Now, accessing http://mysite.com/referenser/example-post/ takes me to http://mysite.com/references/example-post/.
While I retrieve the content anyway (WPML recognises the translation), this creates a bit of a mess when including the plugin on an AJAX-based site. Is there any way to turn off this redirect, or possibly set a translated name for the CPT?
I posted about this on the WPML support forum but haven't received any response.
Using the WPML String Translation module, you should be able to translate the CPT slug so that the link is translated instead of just copying the English name.
Did you get help from the WPML forums? I'd like to follow up if you did not.
Keith-R
WPML Community Manager
When you share something on Facebook or Digg, it generates a summary of the page. How would I do this in Perl? What algorithms are there?
For example:
If I go to Facebook and try to share this question as a link:
How can I create a website summary with Perl?
It retrieves "Facebook/Digg get website summary? - Stack Overflow" as the title (which is just the title of the page) and [... incomplete question?]
CPAN is your friend.
Some promising-looking modules (a usage sketch follows the list):
HTML::Summary
HTML::SummaryBasic
Lingua::EN::Summarize
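For example, HTML::Summary works on an HTML::TreeBuilder parse tree. A minimal sketch (the local file name and the LENGTH of 200 characters are just placeholders):

    use strict;
    use warnings;
    use HTML::TreeBuilder;
    use HTML::Summary;

    # parse the page to summarize (placeholder file name)
    my $tree = HTML::TreeBuilder->new;
    $tree->parse_file('page.html');

    # generate a summary of roughly 200 characters
    my $summarizer = HTML::Summary->new( LENGTH => 200 );
    print $summarizer->generate($tree), "\n";

    $tree->delete;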
Assuming you mean sharing a link...
Usually the summary is written by the user submitting the URL. If you have to generate a summary automagically, this can be achieved by:
Using the first 100 or so characters of the document body (in itself not easy)
Using metadata like the description or keywords (often empty or spammed)
Context-relevant summaries, like recreating Google snippets (sorry, it's PHP, but simple)
Tags/keywords from the document using something like the Yahoo Keyword Extractor API or your own keyword density function
Your best bet is to ask the user!
Hope that helps somewhat :)
Basically you want to scrape the URL and find the "most significant paragraph" which might be the first <div> or <p> element after the first <h2> or <h1>, depending on the layout of the page.
You could check and see if there is a meta description on the page, but that leaves you at the mercy of whoever wrote the meta description.
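A rough sketch of that approach with LWP::Simple and HTML::TreeBuilder (the URL is a placeholder, and the "first <p>" fallback is deliberately simplistic; a real version would look for the paragraph following the first heading):

    use strict;
    use warnings;
    use LWP::Simple qw(get);
    use HTML::TreeBuilder;

    # placeholder URL of the page to summarize
    my $html = get('https://example.com/some-article')
        or die "could not fetch page\n";
    my $tree = HTML::TreeBuilder->new_from_content($html);

    # prefer the meta description if the page author supplied one ...
    my $meta    = $tree->look_down(_tag => 'meta', name => 'description');
    my $summary = $meta ? $meta->attr('content') : undef;

    # ... otherwise fall back to the first paragraph of the body
    if (!defined $summary || !length $summary) {
        my $para = $tree->look_down(_tag => 'p');
        $summary = $para ? $para->as_text : '';
    }

    print "$summary\n";
    $tree->delete;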