FMP URL Format in Filemaker WebDirect - filemaker

I am trying to use some webviewers on FileMaker WebDirect. I would like to call a FileMaker script from a button in the webviewer. It works great in Pro, but I'm not sure of the URL protocol I need to use in WebDirect. What is the format I should use when calling a FileMaker script from a webviewer in WebDirect?

You'll need to test this out because I haven't, but I believe that when you use a webviewer in a WebDirect solution, the webviewer is actually rendered as an iframe tag, and its contents work like any other iframe on a webpage, i.e. they're no longer part of the WebDirect application.
You can check it out in the browser interface of your WebDirect solution by right-clicking on the webviewer and selecting "inspect element". That should open up the browser's developer tools and show you the webviewer's element in the overall HTML structure. The webviewer should be an iframe.
All of that said, if WebDirect does treat the content of the webviewer as an iframe, and that content is therefore outside of your WebDirect solution, then any communication from the webviewer content back to your FileMaker solution would have to happen externally via XML Custom Web Publishing. It would be like standing inside of your house, reaching out of a window, unlocking your front door, and grabbing something you want.
This means you would need:
Web Publishing enabled on your FileMaker Server
An account with the XML extended privilege (fmxml) enabled
An externally reachable IP address or hostname for your FileMaker Server
Then you could (again, theoretically; I have not tried this) use a link containing a URL in the XML Custom Web Publishing syntax to perform the script. You can find a description of the syntax in the FileMaker Server documentation PDF fms13_cwp_xml.pdf. I can't find a good link to the syntax online at the moment, but you could search around for it. The basic syntax outlined in the PDF is:
<scheme>://<host>[:<port>]/fmi/xml/<xml_grammar>.xml[?<query string>]
and calling a script would look like:
http://myfmsdomainname.com/fmi/xml/fmresultset.xml?-script=theScriptIwantToFire
My URL structure could be off; in particular, you may also need -db, -lay, and a query command such as -findany, so check the PDF.
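For what it's worth, here is a rough sketch of how a button inside the webviewer's HTML content might fire that URL. The host, database, layout, and script names are all placeholders, and it assumes the fmxml-enabled account either allows guest access or that the browser supplies its credentials:
// Hypothetical button handler inside the webviewer's HTML page.
function fireFileMakerScript() {
  // -db, -lay and -findany are assumptions based on the CWP syntax; adjust to your solution.
  var url = "http://myfmsdomainname.com/fmi/xml/fmresultset.xml" +
            "?-db=MyDatabase&-lay=MyLayout&-script=theScriptIwantToFire&-findany";
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onload = function () {
    // We only care that the script ran; the XML result set is ignored here.
    console.log("Script request finished with status " + xhr.status);
  };
  xhr.send();
}
Note that if the webviewer's page is served from a different host than the FileMaker Server, the browser's same-origin policy will block the XMLHttpRequest; in that case a plain link or a hidden iframe pointed at the same URL is the simpler option.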
Anyway, it sounds like it could be a pain in the ass, but it may be a solution! Good luck!

This is tricky because you will have to communicate with the WebDirect client via FileMaker Server!
Use the FileMaker Server PHP interface for the webpage inside your webviewer to communicate with the server. Your web page can either:
1) Directly set a value on the server that your client will monitor, or
2) If it has to call a script, call that script on the server; your client will still have to monitor a value. For example, use the PHP API on your web page: set up your database connection (see the API docs in your FileMaker Server folder for an example), then call:
// $fm is the FileMaker connection object created via the PHP API.
$newPerformScript = $fm->newPerformScriptCommand($layoutName, 'scriptname', $scriptParameters);
$result = $newPerformScript->execute();
// Check the result with FileMaker::isError($result) before relying on it.
Your WebDirect client will then need to monitor for that change. While WebDirect is on the layout with the webviewer, your client can use a FileMaker script step to poll for a change in value via a timer:
Install OnTimer Script [Interval: seconds]
When your client detects the change you made, perform your action.
Note: you will have to pass a client ID to the webviewer, your webviewer will have to pass that ID back to the server, and your client will have to monitor for that ID so that all clients don't respond to the change. You can pass whichever client ID you choose to the webviewer via the URL as a GET parameter (see the sketch below).
If your server is local, the change will be detected within one second, if that is what your Install OnTimer Script interval is set to.
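A minimal sketch of the webviewer side of that handshake, assuming the client ID arrives as a ?clientID=... query parameter and that a hypothetical server-side page (setflag.php, a made-up name) uses the PHP API call above to set the value for that client:
// Read the client ID that the WebDirect layout appended to the webviewer URL,
// e.g. http://myserver/page.html?clientID=42
var clientID = "";
var params = window.location.search.substring(1).split("&");
for (var i = 0; i < params.length; i++) {
  var pair = params[i].split("=");
  if (pair[0] === "clientID") { clientID = decodeURIComponent(pair[1]); }
}

// Send it back to the server-side page that performs the script / sets the flag.
function notifyServer() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "setflag.php?clientID=" + encodeURIComponent(clientID), true);
  xhr.send();
}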
What you are doing is far from ideal with Filemaker... I suggest that you look into a different UI paradigm if possible.

Related

Is it possible to make a Confluence Search + Redirect Script which works based on tags in the URL

I would like to write a plug-in which fetches all the page addresses/titles (or other identifiers) and redirects the user based on some input string, probably a tag in the URL. Does making this sort of plug-in sound feasible? Alternatively, does this plug-in already exist?
I imagine an external script sends the user to "myconfluencewiki.com/redirectme#funnycats". From here the script looks through the wiki and finds "myconfluencewiki.com/fun/funnycats" and sends the user there instead. It does this by finding the title so that it works even if the page was originally on "myconfluencewiki.com/animals/funnycats".
There are some add-ons for redirections: Redirection, Advanced Redirection or HomePage Redirect.
If you still need to develop your own add-on, the "path converter" module can be handy.
Regards,
Gorka

How to secure querystring/POST details to a third party

I'm basically looking at a security problem between a parent page and an iframe with links to a third party.
I want to send a POST or a GET (doesn't matter which, as I can control the other side) to the third party, but not expose any details within it (say a SID or a user token), and have its HTML content (JS/HTML/images) loaded into the iframe.
I've looked at server-side redirects and at creating a proxy using WebClient/WebResponse, and am curious whether there is a good way to do it.
Has anyone ever done this before, or do you think this kind of security is not possible? Hell, even tell me if I'm barking up the wrong tree on how to solve this.
If anybody has any examples on this it would be greatly appreciated.
Cheers,
Jamie
[Edit] Was thinking I might need to add some more details.
Say I have a parent page: https://mycompany.com/ShowThirdParty.
This currently has an iframe in it which will hold the content of another component (also owned by me, or more specifically by another team).
Basically I'd like to send some credentials to the content in the iframe in such a way that the external pages can't read them; the iframe is put into a modal (I've done that) and the iframe holds the restricted content, with the authentication almost seamless and invisible.
I currently have it working as a GET url generated dynamically via JS and then passed into the iframe src parameter, obviously that isn't secure.
I kind of want some kind of server side redirect across a full url, but I don't even think that's possible.
You could try using AJAX to load a PHP script (with any parameters to the script encoded/encrypted) that queries the 3rd-party page, and then load the response into the iframe. Not really sure how your code is set up, but there should be a way (rough sketch below).
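A rough sketch of that idea, assuming a same-origin proxy page (proxy.php is a made-up name) that holds the secret server-side, and a src-less iframe with id "contentFrame":
// Ask the same-origin proxy to fetch the third-party page; the token never
// appears in the iframe URL or anywhere in the client-side code.
var xhr = new XMLHttpRequest();
xhr.open("GET", "proxy.php?page=content", true);
xhr.onload = function () {
  var frame = document.getElementById("contentFrame");
  // The iframe has no src, so it is same-origin and we can write into it.
  frame.contentDocument.open();
  frame.contentDocument.write(xhr.responseText);
  frame.contentDocument.close();
};
xhr.send();
One caveat: relative links, scripts, and images inside the proxied HTML will resolve against your own domain unless the proxy rewrites them.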
It can also be done with the POST method (submit the data to the iframe using POST); since it is HTTPS, the data you send to the iframe is encrypted in transit. A sketch follows.
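A rough sketch of the POST-to-iframe approach, assuming the iframe already exists on the page with name="contentFrame"; the third-party URL and the field name are placeholders:
// Build a hidden form that POSTs the token to the third party, targeting the
// iframe by name so only the iframe navigates and nothing lands in the URL.
function postToIframe(token) {
  var form = document.createElement("form");
  form.method = "POST";
  form.action = "https://thirdparty.example.com/content"; // placeholder URL
  form.target = "contentFrame"; // must match the iframe's name attribute
  form.style.display = "none";

  var field = document.createElement("input");
  field.type = "hidden";
  field.name = "token"; // made-up field name
  field.value = token;
  form.appendChild(field);

  document.body.appendChild(form);
  form.submit();
  document.body.removeChild(form);
}
Keep in mind this hides the token from the URL and (over HTTPS) from intermediaries, but not from the person driving the browser, who can still see it in the developer tools.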

Not able to open the web page while running a test from Selenium

When I run my scripts as a JUnit test case, the browser opens and tries to open the provided URL, but only the header and footer of the website load, along with the message: "Cookies and Javascript Required
In order to correctly view this website, you will need Cookies and Javascript enabled on your browser. To set your browser to support these requirements, please visit your browser's help menu for the appropriate instructions."
This makes the entire script fail, as the web elements are not displayed.
You need to provide some more information:
Does this behavior happen when you access the site normally, i.e. without Selenium RC involved?
Which browser are you using?
Have you tried another browser? You can do that by changing the parameters in this line:
seleniumId = new DefaultSelenium( "localhost", 4444, "*iexplore", "http://URL");
(A useful trick is to put garbage in the browser parameter and when you run it, the error message shows all the allowable browser strings.)
Have you tried to enable cookies and Javascript? What happens then?
If you don't want cookies and Javascript enabled normally and you are using FireFox, you can set Selenium RC up to use a special proxy that does allow this (but only for Selenium tests). See here

How does MS-Word open a hyperlink?

I have an MS-Word document with a hyperlink. The hyperlink points at an authentication redirector on my server. When I control-click on the hyperlink, my server logs report that it
does a fetch with IE, then
fetches the redirect url with IE, then
launches the "default browser", which is Firefox in my case, and re-fetches the second (redirect) URL.
What gives? Is this by design?
I noticed this because my auth system is currently dependent on cookies set by the redirector. I have some ideas about using url-based auth for this bit, but I need to know what is motivating Word's behavior first.
I have some guesses but I'm looking for something authoritative (or at least a better-informed guess).
Unfortunately, yes. And they try to blame it on "a limitation of the single sign-on system used by the web server"...
http://support.microsoft.com/kb/899927
Actually, this is a "feature". If the hyperlink is to a Word document, word will attempt to download the document and open it. (You must be thinking it's IE because of the user-agent, but the request is coming from WinInet in the the Word process.)
The mess comes about when the server doesn't respond with a page, but rather responds with a redirect and cookies. Word follows the redirect to see if it's going to get a Word document, and it eventally ends up with an HTML page. It then decides that Firefox should display it, so it launches Firefox with the final redirected URL, (but without any of the cookies that the server sent).
Firefox may end up needing those cookies, if this is an SSO sign-on.
Late addition:
I noticed the same problem. Here, with MVC 4, it caused the loss of query string information.
Word launches the browser only after it receives an HTTP 200 status.
So I avoided this by checking in the controller whether the request comes from IE7 (which in our case most likely means MS Word) and returning a 200 manually.
Then the 'real' browser re-sends the HTTP request and all's well that ends well, since from there the request is processed normally and all information is retained in the session with the 'real' browser.
Bit of a workaround, but hey, it works. And it's only for a small number of requests (in our case).

Best way to store data for Greasemonkey based crawler?

I want to crawl a site with Greasemonkey and wonder if there is a better way to temporarily store values than with GM_setValue.
What I want to do is crawl my contacts in a social network and extract the Twitter URLs from their profile pages.
My current plan is to open each profile in its own tab, so that it looks more like a normal browsing person (i.e. CSS, scripts and images will be loaded by the browser), then store the Twitter URL with GM_setValue. Once all profile pages have been crawled, create a page using the stored values.
I am not so happy with the storage option, though. Maybe there is a better way?
I have considered inserting the user profiles into the current page so that I could process them all with the same script instance, but I am not sure whether XMLHttpRequest looks indistinguishable from normal user-initiated requests.
I've had a similar project where I needed to get a whole lot of invoice line data from a website and export it into an accounting database.
You could create a .aspx (or PHP etc) back end, which processes POST data and stores it in a database.
Any data you want from a single page can be stored in a form (hidden using style properties if you want), using field names or IDs to identify the data. Then all you need to do is make the form action the .aspx page and submit the form using JavaScript.
(Alternatively you could add a submit button to the page, allowing you to check the form values before submitting to the database).
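A rough sketch of that idea as it might look inside the Greasemonkey script; the back-end URL, the field name, and the selector used to find the Twitter link are all placeholders:
// Runs on each profile page (==UserScript== header omitted).
// Grab the Twitter URL from the profile page; the selector is a guess.
var twitterLink = document.querySelector('a[href*="twitter.com"]');
if (twitterLink) {
  // Build a hidden form and POST the value to a back-end page you control.
  var form = document.createElement("form");
  form.method = "POST";
  form.action = "https://myserver.example.com/collect.aspx"; // placeholder
  form.style.display = "none";

  var field = document.createElement("input");
  field.type = "hidden";
  field.name = "twitterUrl"; // made-up field name
  field.value = twitterLink.href;
  form.appendChild(field);

  document.body.appendChild(form);
  form.submit();
}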
I think you should first ask yourself why you want to use Greasemonkey for your particular problem. Greasemonkey was developed as a way to modify one's browsing experience -- not as a web spider. While you might be able to get Greasemonkey to do this using GM_setValue, I think you will find your solution to be kludgy and hard to develop. That, and it will require many manual steps (like opening all of those tabs, clearing the Greasemonkey variables between runs of your script, etc).
Does anything you are doing require the JavaScript on the page to be executed? If so, you may want to consider using Perl and WWW::Mechanize::Plugin::JavaScript. Otherwise, I would recommend that you do all of this in a simple Python script. You will want to take a look at the urllib2 module. For example, take a look at the following code (note that it uses cookielib to support cookies, which you will most likely need if your script requires you to be logged into a site):
import urllib2
import cookielib

# Build an opener with a cookie jar so cookies persist across requests
# (needed if the script has to be logged in to the site).
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookielib.CookieJar()))
response = opener.open("http://twitter.com/someguy")
responseText = response.read()
Then you can do all of the processing you want using regular expressions.
Have you considered Google Gears? That would give you access to a local SQLite database which you can store large amounts of information in.
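If you do go the Gears route, the sketch below shows roughly what storing each URL in the local SQLite database could look like. It assumes gears_init.js is already loaded, and the database, table, and variable names are made up:
// Assumes gears_init.js has been included so google.gears is available.
var db = google.gears.factory.create('beta.database');
db.open('crawler-store'); // made-up database name
db.execute('CREATE TABLE IF NOT EXISTS TwitterUrls (profile TEXT, url TEXT)');

// Store one row per crawled profile page (twitterUrl is whatever you extracted).
db.execute('INSERT INTO TwitterUrls VALUES (?, ?)', [location.href, twitterUrl]);

// Later, read everything back to build the results page.
var rs = db.execute('SELECT url FROM TwitterUrls');
while (rs.isValidRow()) {
  console.log(rs.field(0));
  rs.next();
}
rs.close();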
The reason for wanting Greasemonkey is that the page to be crawled does not really approve of robots. Greasemonkey seemed like the easiest way to make the crawler look legitimate.
Actually, running your crawler through the browser does not make it that much more legitimate. You are still breaking the site's terms of use! WWW::Mechanize, for example, is equally well suited to 'spoof' your User-Agent string, but that, and the crawling itself, is illegal if the site does not allow spiders/crawlers!
The reason for wanting Greasemonkey is that the page to be crawled does not really approve of robots. Greasemonkey seemed like the easiest way to make the crawler look legitimate.
I think this is the hardest way imaginable to make a crawler look legitimate. Spoofing a web browser is trivially easy with some basic understanding of HTTP headers.
Also, some sites have heuristics that look for clients that behave like spiders, so simply making requests look like they come from a browser doesn't mean they won't know what you are doing.