I'm running into a tedious issue.
I have a Visio document that contains at least 100 pages. My issue is that I have a text box that references other pages. EXAMPLE: on page 10, I have a text box that says "SEE PAGE 45". The problem is that my document's page count changes quite often over the course of a project. So if I add another page before page 45, it isn't page 45 anymore, it is page 46, and now I have to go back to page 10 (and every other page that references page 45) and change the text to page 46!
This is very nerve-wracking! I tried using "User-defined Cells" with ="SEE PAGE "&PAGENUMBER(), but that gives you the page number of the current page the shape sits on. Not what I want....
NUTSHELL:
I want page 10's text box to reference page 45, and if page 45's position changes (e.g. it becomes page 46 or page 50), I want that text on page 10 to update to page 45's new number automatically.
Please help!
Thank you in advance!
Tony
Please check the same thread on the Russian Visio forum (via Google Translate).
I am new to TYPO3 and have a big problem. I deleted the page with id 1 (the start page, I know it's stupid) and now I would like to know if it is possible to restore the page somehow.
Install the Recycler extension; it ships with TYPO3 by default. You can then restore the pages and content you deleted using its backend UI.
https://docs.typo3.org/typo3cms/extensions/recycler/Introduction/Index.html
You can use the Recycler, which is found under the Web module: click on a page in the page tree at a higher level than the deleted page. You can also set the depth to infinite if you are not certain where the page used to be in the page tree. Then put a tick beside the page you want to restore and click Undelete. Note that PID is the 'Parent ID' (not Page ID), and UID is the ID of the deleted page. If you click on the little + you will get additional information, e.g. the original path to the deleted page. If part of the path is highlighted in red, that indicates another deleted branch or page. Remember to leave the Recycler module with 'Depth' NOT set to 'Infinite' (leave it at 'This page' or '1-4 levels'); otherwise the Recycler will hang if you use it on a larger part of your page tree. Hope this helps you.
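If the Recycler module is not available for some reason, the same restore can be done at the database level, because TYPO3 only soft-deletes records: deleting a page through the backend just sets deleted = 1 on its row in the pages table (and on its content elements in tt_content), which is why the Recycler can restore it at all. Below is a rough last-resort sketch of that approach, assuming MySQL, the default table layout and plain JDBC with the MySQL driver on the classpath; the connection settings are placeholders, and you should back up the database first.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    // Last-resort sketch: clear TYPO3's soft-delete flag directly in the database.
    // Connection settings and the uid (1, as in the question) are placeholders.
    public class UndeletePage {
        public static void main(String[] args) throws Exception {
            try (Connection db = DriverManager.getConnection(
                    "jdbc:mysql://localhost/typo3_db", "typo3_user", "secret")) {

                // Restore the page record itself (uid = Page ID).
                try (PreparedStatement ps =
                        db.prepareStatement("UPDATE pages SET deleted = 0 WHERE uid = ?")) {
                    ps.setInt(1, 1);
                    ps.executeUpdate();
                }

                // Restore the content elements that lived on that page (pid = Parent ID).
                try (PreparedStatement ps =
                        db.prepareStatement("UPDATE tt_content SET deleted = 0 WHERE pid = ?")) {
                    ps.setInt(1, 1);
                    ps.executeUpdate();
                }
            }
        }
    }

Clear the TYPO3 caches afterwards so the restored page shows up again in the frontend.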
I have noticed this numerous times, and have yet to find a fix; when using the share link (http://www.facebook.com/sharer.php?u=MyUrl&t=titleInfo), it pulls in the image, page title, and description from MyUrl just fine. However, it seems to ignore &t, and just uses the page title regardless. That is not the big issue though.
The problem is, if I totally change my page title and opening text, the share link won't update. It seems locked on whatever the page had on it the first time I tested the share link. Is there a way to make it refresh?
In my instance, I was updating some site pages from last year's contest edition. Before I got the main page updated, I was working on the FB share link on a different page. I clicked it to test whether it worked with the new graphic, and it did, but of course it pulled up the 2012 page content.
Then I updated all of the page content for this year's contest (same URL), and now, no matter who tests it, it still pulls up last year's info (page title and description). It is as if FB has locked that info in, and no matter who shares my link, it always pulls up the old text.
I have seen this before with YouTube links. Someone shares one, I go in later and change the title, and no matter who shares it afterwards it never updates; that initial share text seems permanently locked in FB.
Does anyone know how I can make it refresh, without having to make a whole new URL from last year?
Actually, Facebook fetches your page information at the time of posting and saves it in its database. The next time that URL is shared, it pulls the information from its database, not from your page.
You may have to share the page again to create a new entry in Facebook's database, but the old one will remain there.
You can force Facebook to clear their cache by using their debugger. Enter the URL in question at https://developers.facebook.com/tools/debug. You need to do this for each page you are working on.
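If you have a lot of URLs to refresh, the same re-scrape the debugger performs can also be triggered from code by POSTing the URL to the Graph API with scrape=true. A rough sketch, assuming Java 11+ and that you already have an app access token; the page URL and token below are placeholders:

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class FacebookRescrape {
        public static void main(String[] args) throws Exception {
            String pageUrl = "https://example.com/contest"; // the URL whose cached info is stale
            String accessToken = "YOUR_APP_ACCESS_TOKEN";   // placeholder

            // Asking the Graph API to re-scrape is equivalent to pasting the URL
            // into the debugger at developers.facebook.com/tools/debug.
            String body = "id=" + URLEncoder.encode(pageUrl, StandardCharsets.UTF_8)
                    + "&scrape=true"
                    + "&access_token=" + URLEncoder.encode(accessToken, StandardCharsets.UTF_8);

            HttpRequest request = HttpRequest.newBuilder(URI.create("https://graph.facebook.com/"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The response echoes back the freshly scraped Open Graph data as JSON,
            // so you can check whether the new title and description were picked up.
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }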
There are also some rules about not being able to change the info they have once there are more than (50?) likes.
I know that this topic has been discussed before to varying extents, but I have some specific queries. I will use an example for this case and would like your views on it.
Example: a home finance management website. There are two pages. The basic page after login is an empty page with a text box. Type in "Rent" and rent details and trends pop up. Type in "Bills" and bill details and history pop up. The data shown to each user is different, of course.
Now -
1. If I place an AdSense script on the basic home page, where I just have a text box, will it be disqualified for not having enough content?
2. Even if the content changes (AJAX), does the ad change to suit the content? Does the crawler re-index the pages at defined intervals, keeping whatever it finds there and searching it for keywords? The same page may show different content to different users and hence have different keywords. (Also, since login would be cookie-based, how does the crawler see this page?)
Edit -
I know from HERE that Google does take AJAX calls into account, but since the results are populated dynamically from a database, with data unique to each user, the bot looking at the form's action page doesn't help much, does it?
3. Google prefers the GET method. So if I go with URLs like xyz.com?show=rent / xyz.com?show=bills, the page is regenerated and the script reloaded, but each time the crawler sniffs either of the two pages it might see different content for different users (see the sketch after this list). What does it do?
4. If I do not reload the page by form submission and the page is not regenerated every time, can I call a function to document.write the div I am putting the ad in? Would that make it re-sniff the page?
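To make point 3 concrete, here is a rough sketch of what I have in mind, assuming (hypothetically) a Java servlet backend; RentBillsServlet and the copy text are made up. The idea is that the descriptive content for xyz.com?show=rent / xyz.com?show=bills is rendered server-side and is the same for every visitor (including the crawler), while the user-specific numbers only appear for a logged-in session:

    import java.io.IOException;
    import java.io.PrintWriter;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet illustrating the GET-based URLs from point 3.
    public class RentBillsServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String show = req.getParameter("show");           // "rent" or "bills"
            boolean loggedIn = req.getSession(false) != null;  // placeholder login check

            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            out.println("<html><body>");
            if ("bills".equals(show)) {
                // Static, crawlable copy that every visitor (and the bot) sees.
                out.println("<h1>Bills</h1><p>Track utility bills and payment history.</p>");
                if (loggedIn) {
                    out.println("<div>(this user's bill history from the database)</div>");
                }
            } else {
                out.println("<h1>Rent</h1><p>Track rent payments and monthly trends.</p>");
                if (loggedIn) {
                    out.println("<div>(this user's rent details from the database)</div>");
                }
            }
            out.println("<div><!-- AdSense script goes here --></div>");
            out.println("</body></html>");
        }
    }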
Any help is much appreciated.
Could anyone explain to me what Wicket's page versioning is useful for? There is an article in the FAQ related to this topic:
Wicket stores versions of pages to support the browser's back button.
Suppose you have a paging ListView with links in the ListItems, and you've clicked through to display the third page of items. On the third page, you click the link to view the details page for that item. Now, the currently available state on the server is that you were on page 3 when you clicked the link. Then you click the browser's back button twice (i.e. back to list page 3, then back to list page 2, but all in the browser). While you're on page 2, the server state is that you're on page 3. Without versioning, clicking on a ListItem link on page 2 would actually take you to the details page for an item on page 3.
But unfortunately I don't understand it at all. When I click on a ListItem on page 2, I would expect to get to the page defined by that Link, i.e. the details page for that item. Why would I end up on the details page of the item on page 3?
Moreover, when one presses the back button in a browser, it doesn't call the server at all. Is that right?
So how does this versioning work?
No, the server is not notified when you press the back button. I'll try to explain what happens in the example:
You access your application for the first time. On the server, a page 'list' is created, used to render the HTML you see in your browser, and stored as page v1. You see the first 10 items of the list.
You click the 'next' link, and it refers to a link in page v1. On the server, page v1 is loaded, the link logic is executed (to advance the pagination), the page is used to render HTML, and is stored as page v2. You see items from 11 to 20.
You click the 'next' link, and it refers to a link in page v2. On the server, page v2 is loaded, the link logic is executed (to advance the pagination), the page is used to render HTML, and is stored as page v3. You see items from 21 to 30.
You click the 'details' link for item 25, and it refers to the link for the 5th item in page v3 (the page only shows 10 items, so even though the link points to the 25th item of the complete list, within this page it is just the 5th). On the server, page v3 is loaded, its logic is executed, page 'detail' is created, stored as page v4, and you are redirected to it. Your browser requests page v4, the server loads it and uses it to render your HTML page (no new version is stored, since it's just rendering). You see the details for item 25.
You click the browser's 'back' button 2 times, and see page 'list' showing items 11 to 20, which refers to page v2 (list). Then you click the 'details' link for item 13. On the server, page v2 is loaded (not v4, the last one executed), since the link you clicked pointed to this page version. Then the 3rd item link's logic is executed, a new page 'details' is created, stored as page v5, and you are redirected to it. The browser requests page v5, the server loads it and uses it to render your HTML. You see the details for item 13, which is exactly the item you clicked, even though the server had since moved on to page 3 of the list.
All this may seem strange if you come from a Struts-like background, where you always just put the item id, or which page to show, as a link parameter. In Wicket the usual case is to store all state on the server, and navigation is not done by the client (a direct link to another page passing parameters) but on the server. A link just asks the server to execute code in a particular page object version; the navigation is all done server-side.
You could argue that the Struts style is simpler (and you can do it in Wicket too, it just isn't optimal), but keeping the state only on the server has many advantages. First, once you get used to it, it's actually much easier: no need to add every single parameter to a pagination link (search parameters, first item, page length, sorting column, order direction, etc.). Also, you avoid many security issues (you can't just change an id parameter in the URL to an arbitrary value and access other users' data), and you can control everything from Java code instead of mixed Java and JavaScript (you still can do that if you want, though).
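To make the walkthrough above concrete, here is a minimal sketch of the kind of list page being described. This is not the original poster's code: the class and component names (ListPage, DetailsPage, Item, "list", "details") are made up, the matching .html templates are omitted, and it assumes a reasonably recent Wicket version. The key point is that the Link's onClick() runs on the server against whichever stored page version the click came from.

    import java.util.Arrays;
    import java.util.List;

    import org.apache.wicket.markup.html.WebPage;
    import org.apache.wicket.markup.html.basic.Label;
    import org.apache.wicket.markup.html.link.Link;
    import org.apache.wicket.markup.html.list.ListItem;
    import org.apache.wicket.markup.html.list.PageableListView;
    import org.apache.wicket.markup.html.navigation.paging.PagingNavigator;

    public class ListPage extends WebPage {

        public ListPage() {
            // 10 rows per page. The ListView's current page index lives in this
            // ListPage instance on the server; each request that changes it is
            // stored as a new page version (the v1, v2, v3 ... above).
            PageableListView<Item> list =
                    new PageableListView<Item>("list", loadItems(), 10) {
                @Override
                protected void populateItem(ListItem<Item> row) {
                    final Item item = row.getModelObject();
                    Link<Void> details = new Link<Void>("details") {
                        @Override
                        public void onClick() {
                            // Executed against whichever page *version* the click
                            // came from, so a click on row 3 of version v2 uses
                            // v2's pagination state, not the latest one.
                            setResponsePage(new DetailsPage(item)); // DetailsPage: assumed to exist
                        }
                    };
                    details.add(new Label("name", item.getName()));
                    row.add(details);
                }
            };
            add(list);
            add(new PagingNavigator("nav", list)); // renders the 'next'/'previous' links
        }

        private List<Item> loadItems() {
            // Placeholder data; a real page would load this from a service or model.
            return Arrays.asList(new Item("Rent"), new Item("Bills"), new Item("Groceries"));
        }

        // Minimal item type for the sketch.
        public static class Item implements java.io.Serializable {
            private final String name;
            public Item(String name) { this.name = name; }
            public String getName() { return name; }
        }
    }

With versioning, a 'details' click on a stale back-button view is executed against that view's pagination state, so you get the item you actually clicked; without it, the click would run against the newest list state and could return a different item.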
So I created a nice 6-page website, hutchspropertyandtree.co.nr, using freedomain.co.nr and my Dropbox public folder. Everything was working and updating properly until I updated it with iWeb's SEO Tool. I added meta and title tags as well as a description, etc. The PROBLEM is that even though my .html files in Dropbox are correct and show all the new code and tags, when I open my domain hutchspropertyandtree.co.nr it doesn't show any of my recent SEO Tool updates.
I'm thinking that the cheap domain name from .co.nr is the problem? Is it possible that the default tags, titles and keywords entered into the .co.nr website creation boxes are overwriting the newer ones in the HTML within my Dropbox?
But that still doesn't explain why a StatCounter code and Google Analytics code, in the footer and header respectively, still do not show up when I view the source in the browser.
PLEASE PLEASE HELP.
It's because the page at hutchspropertyandtree.co.nr uses a frame to show the content from another location. The meta information comes from the page with the frame, not the page in the frame. You should be able to see the content of the frame using an inspector (comes with all browsers these days) or "View frame source", if your browser does that.
Note that any search engine hits to your pages will link to the Dropbox URL, not the frame page (which has essentially no content from the viewpoint of a search engine). If you want search engine results to show up under that domain, you'll have to get hosting that lets you point a domain directly at it.
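If you want to confirm this outside the browser, fetching the .co.nr address directly shows the wrapper page the forwarding service serves, rather than your iWeb-generated HTML from Dropbox. A quick sketch (Java 11+; what exactly comes back depends on the forwarding service):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ShowWrapperPage {
        public static void main(String[] args) throws Exception {
            // Fetch the forwarding domain, not the Dropbox URL behind it.
            HttpRequest request = HttpRequest
                    .newBuilder(URI.create("http://hutchspropertyandtree.co.nr/"))
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // Expect a small page with the forwarding service's own <title> and meta
            // tags and a frame pointing at the Dropbox URL, which is why the SEO
            // changes made in the Dropbox files never show up in "view source".
            System.out.println(response.body());
        }
    }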