Facebook Like button doesn't show my images [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
I have added two Like buttons to images on my website, but when you click Like and try to share them, the space for the picture is blank. How can I get the picture to show in the share box and to display on Facebook?
My website is www.firamedmariamontazami.se

Documentation:
href - the URL to like. The XFBML version defaults to the current
page. Note: After July 2013 migration, href should be an absolute URL
Can you confirm that href is an absolute URL?
If yes, try including Open Graph meta tags on your target pages (the page specified in href), like:
<meta property="og:image" content="http://website.com/image.jpg" />
Don't forget to check the size of your image; you can find more details in this discussion.
Edit:
Paste your link into the Facebook Object Debugger and check what Facebook says about your image; that will help surface more details.
Update:
About your question:
Can I use different metas with different pictures? I want ten different pictures on my page that you can share.
Look at this paragraph in the documentation: you can specify multiple images, and your first image tag is given preference when there is a conflict. Keep in mind that when you share something on Facebook, only one image is displayed.
You may also be interested in setting other Open Graph meta tags, so that if someone shares your page (copies/pastes your URL into Facebook), a nice summary of the page (title, description, image...) is displayed.
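As a sketch, a head section carrying several Open Graph tags might look like this (all URLs and text here are placeholders to replace with your own):

```html
<head>
  <meta property="og:title"       content="Page title shown on Facebook" />
  <meta property="og:description" content="Short summary shown under the link." />
  <!-- Several og:image tags are allowed; the first one is preferred on conflicts -->
  <meta property="og:image" content="http://example.com/images/photo1.jpg" />
  <meta property="og:image" content="http://example.com/images/photo2.jpg" />
</head>
```

Note that og:image URLs should be absolute, just like the href on the Like button.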

Could you try uploading a photo at a smaller resolution so that we can test? I'm wondering whether Facebook is misbehaving because the image is too large.

Related

Can I use http://schema.org/Article for an article preview? [duplicate]

Teasers on the front page of a blog are surely not targets for adding itemtype="http://schema.org/BlogPosting", because each of them is not a full blog post but just one or two paragraphs with a "Continue reading" link.
But since they are part of a blog, is there any blog-related Microdata for them or not?
A person is still a person, even if you don’t provide the name. A place is still a place, even if you don’t provide the address. A blog posting is still a blog posting, even if you don’t provide the full content.
So using BlogPosting for a teaser is perfectly fine. If you don’t show the full content, just omit the articleBody property.
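A minimal sketch of such a teaser (the URLs and text are hypothetical):

```html
<!-- Teaser marked up as a BlogPosting; articleBody is simply omitted -->
<article itemscope itemtype="http://schema.org/BlogPosting">
  <h2 itemprop="headline"><a itemprop="url" href="/posts/example">Example post</a></h2>
  <p itemprop="description">The first paragraph or two of the post…</p>
  <a href="/posts/example">Continue reading</a>
</article>
```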

SelectPDF Tool to Convert HTML to PDF [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 2 years ago.
I am trying to use SelectPDF.ConvertURL() on my local machine. When I run this conversion I get a PDF with sizing issues: the checkboxes are too large and do not appear as they do in the HTML page.
But when I go to http://selectpdf.com/demo/html-to-pdf-converter.aspx and give it the URL of my HTML test document, it renders correctly. The URL for the document is: http://dev.TitleClose.com/BlankLoanEstimate.html
I am using the exact code that is given on the selectpdf.com/demo page.
Any ideas or advice are appreciated.
You might need to set a different web page width. Try this sample:
http://selectpdf.com/demo/convert-url-to-pdf.aspx
For an A4 page, the corresponding web page width (to keep the font size) is 793px. More details about the content resizing here:
http://selectpdf.com/docs/ResizingContentDuringConversion.htm
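The 793px figure follows from A4's physical width (210 mm) and the CSS reference resolution of 96 px per inch (1 inch = 25.4 mm); a quick sanity check of that number, ignoring page margins, which reduce the usable width:

```python
# A4 paper is 210 mm wide; CSS defines 96 px per inch (1 inch = 25.4 mm).
A4_WIDTH_MM = 210
PX_PER_INCH = 96
MM_PER_INCH = 25.4

a4_width_px = A4_WIDTH_MM / MM_PER_INCH * PX_PER_INCH
print(int(a4_width_px))  # → 793
```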
Add an @media print block to your CSS with the same properties that you have in your screen CSS; that worked for me:
@media print {
…
}
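For the checkbox sizing issue specifically, a hypothetical print-only override might look like this (the selector and sizes are guesses to adapt to your page):

```css
/* Hypothetical print-only overrides; adjust selector and sizes to your page */
@media print {
  input[type="checkbox"] {
    width: 13px;
    height: 13px;
  }
}
```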

Is it OK to store SEO-relevant content in a database? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I'm building a website in the Zend Framework, and I'm using a layout page that gets applied to all of my pages. Its general structure is as follows:
<!DOCTYPE>
<html>
<head>
</head>
<body>
Content of individual pages comes in here...
</body>
</html>
Ideally I would like to put <title> and <meta name="description"> etc. in this Zend layout page, and then pull the content of these tags from my database dynamically depending on which page the content comes from. Unfortunately, while Google is happy to give me tons of info on how to write title/description/etc. tags, I haven't been able to confirm whether pulling them from a database is OK. Is it? Am I thinking about this wrong? I'm worried crawlers won't be able to get this content. Is there a better way to associate titles, descriptions, etc. with pages (other than writing a head section in each individual page that contains this info)?
Thanks for the help! (I suspect this is a simple question, but I'd like to confirm the answer somewhere!)
Yes, it is OK to store titles and meta descriptions in a database.
It is not generally possible for a web browser or a web crawler to even tell whether information is stored on the server in a database, or whether it comes from static files. Google won't even know for sure that your titles and meta descriptions are stored in a database.
Pretty much every CMS, such as WordPress or Drupal, stores all of its content (including titles and meta descriptions) in a database. It is very common practice.
The direct answer to your question is yes, you can store metadata in a database.
Storing metadata in a database doesn't affect whether Google's crawlers will crawl your pages successfully. As long as you write the <title> and <meta> elements correctly on your pages with the info from the database, you'll have no problems.
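A hypothetical sketch (in Python rather than PHP, purely for illustration) of why this works: the crawler only ever sees the final HTML, so it cannot tell that the title came from a database. In Zend Framework you would typically achieve the same thing with the headTitle() and headMeta() view helpers in your layout.

```python
# Hypothetical sketch: page metadata "stored in a database" (a dict here),
# rendered into a shared layout. A crawler only sees the resulting HTML,
# so it cannot tell whether the title came from a database or a static file.
PAGES = {
    "/about": {"title": "About us", "description": "Who we are and what we do."},
}

LAYOUT = (
    "<head><title>{title}</title>"
    '<meta name="description" content="{description}"></head>'
)

def render_head(path):
    meta = PAGES[path]  # the "database" lookup
    return LAYOUT.format(**meta)

print(render_head("/about"))
```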

How to set print and save-as-PDF icons in TYPO3 pages [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Can anybody help me with how to set PRINT and save-as-PDF icons (functionality) on TYPO3 pages?
Thanks in advance.
There are many examples on Google, and I think you should browse them to find the one that best fits your needs.
In general, the print version has 'historically' been built with a new PAGE cObject whose typeNum is set to 98 (of course, that's only a convention). Following this clue, you should find many examples and other resources by searching Google for typo3 typeNum 98.
When you create the alternative PAGE object (and perhaps also use a modified template for it), you also need to add a link on your web page that is the same as the current URL but with the additional parameter &type=98; when a user clicks it, TYPO3 will open the alternative version of the page. You can then add JavaScript in the header of that version which also starts the system's print dialog.
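A minimal TypoScript sketch of that idea (hypothetical; it assumes the css_styled_content static template is included so that styles.content.get is available, and the link setup may need adjusting for your site):

```typoscript
# Alternative rendering of the current page, reachable via ...&type=98
print = PAGE
print {
  typeNum = 98
  # reuse the normal content rendering (requires css_styled_content)
  10 < styles.content.get
}

# A link on the normal page pointing to its print version
page.20 = TEXT
page.20 {
  value = Print this page
  typolink.parameter.data = TSFE:id
  typolink.additionalParams = &type=98
}
```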
You can also search the extension repository for something that places the print button, if you are unfamiliar with TypoScript.
PDF rendering is similar from the frontend's point of view, but you will most probably need an additional library, so it is best to search the repository for a ready-to-use solution.
In general, a PDF version can be tricky; from my experience, it is sometimes better to avoid the PDF icon altogether, or to link to some external service. Of course, it all depends on your needs. Remember that there are many programs that can create PDFs, so if it is not required, it may not be worth the effort.
Finally, take a look at the AddThis widget. It can also be used to easily add icons for printing and online PDF creation; additionally, you can send invitations via e-mail or even share the link on hundreds of social portals. And what's most important, installing it is just a matter of adding a few lines of HTML code via TypoScript.

Ethics of blocking external hotlinking [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I'm just looking through some of the webmaster stats that Google provides, and noticed that the most common links to our website are to some research articles that we've put up in PDF format. The articles are also available on the site in HTML.
I was looking at the sites (mostly forums and blogs) which link to these articles and was thinking that none of the people clicking the links would actually get to see our website, and that we're giving something away for free and not even getting some page views in return.
I thought that maybe I could change my server settings to redirect external requests to these files to the HTML version. This way, the users still get the same content (albeit in an unexpected format), and we'd get these people to see our website and hopefully explore it some more. Requests coming from my site should be let through to the PDF. Though I don't know how to set this up just yet (keep an eye out for a follow-up question here), I'm sure this is technically possible. The only question is: is that a good idea?
What would you consider the downsides of redirecting traffic from external sources such that they see our site, not just get our content? Do they outweigh the benefits?
The only other alternate option I can see is to make our branding and URL much more visible in the PDF files themselves. Any thoughts?
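The redirect described above could be sketched with Apache mod_rewrite roughly like this (example.com and the /articles/ path are placeholders; the empty-referer exemption avoids breaking direct links and users who suppress the Referer header):

```apache
RewriteEngine On
# Let requests with no referer through (direct links, privacy settings)
RewriteCond %{HTTP_REFERER} !^$
# Let requests coming from our own pages through
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
# Everyone else asking for a PDF gets the HTML version instead
RewriteRule ^articles/(.+)\.pdf$ /articles/$1.html [R=302,L]
```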
Hopefully your PDFs are equally branded, so that visitors will feel compelled to explore your website further. That might be just as important as having visitors briefly stop over at your website.
I'm usually opposed to all such redirects as harmful to usability. However, in this case a basic content-type negotiation takes place and this might be acceptable. However, make sure that this doesn't break downloads of the PDF documents for users who might have disabled their referers in the browser (I do this, for one).
Sure you could cut them off, but there is a bigger issue at play: Why aren't these people finding you before they are finding these moocher sites?
Possible reasons are:
a) they did find your site, but not the content they were looking for, even though it's obviously there, or
b) your site never appeared in their search results.
You may want to consider a site redesign in order to address those concerns before cutting off what appears to be a reliable source of information about your target audience (for you and the people who get your PDFs from elsewhere).
In the meantime, I would suggest you allow the traffic, add a cover page to all of your PDFs that is basically a full-page ad for your site, and then enlarge the font of the copyright section on each page so the authorship is very prominent. You have a built-in audience now; they just don't know it yet. Show them where the source is.
Eventually, the traffic will come to you and know you as a reliable source for that information.
I would do it. It's your site and your data.
The hot-linkers are essentially 'guests' and you can make the rules for your guests.
If they don't like it, they don't have to link.
I would add a page at the beginning of each article with info about the website, the current article and links to other articles on your website.
I find it more convenient than redirecting the user to a page on your website (that's annoying). Most people right-click and download PDF files; what would your redirect do then? ;)
I think the proper thing to do in this situation is to skip the redirects. Here's why:
There's nothing worse than expecting to go somewhere or get something and not getting it (the negative impact would outweigh the positive).
Modify your content to add a footer such as: "like what you saw, we've got more, check us out at www.url.com"
If your content is good, users will check out your website. These are the visitors you want, they're more likely to stick around and provide your site with value (whatever that may be.) Those that you've coerced may provide you with an extra click or two, but you will likely not see any value given back to your site.
Look at other successful sites that give something away for free: Joel on Software, Seth Godin, Tim Ferriss, 37Signals. The long term will provide better, more consistent value than the short term.
If you go for this solution, check whether redirecting to the HTML version also changes the file name the browser displays when somebody uses 'Save as' on the link; otherwise an HTML page would be saved with a .pdf extension. Apart from that, I can see no reason why you shouldn't do it.
As an alternative, see if you can add a link to your site to the top of the pdf file. This way they are reminded where it comes from even if someone else sent it to them by email.