Generate multiple sheets in one pdf file - tableau-api

I am trying to generate a PDF from a Tableau workbook that has two sheets, using the URL method:
e.g. https://TableauServer/views/workbook/sheet1?:format=pdf&parameter=value
I am doing this from a program that issues the request to that URL. This works fine for one sheet, but how do I generate a single PDF file with both sheets in it?

If you first put your two sheets into a single dashboard and then use the URL for the published dashboard (still with the :format=pdf parameter), this should work just fine.
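For example, a minimal sketch in Python using the requests library. The server name, dashboard name, and parameter are placeholders, and whatever authentication your Tableau Server requires (session cookie, trusted ticket, etc.) is omitted here:

import requests

# Placeholder URL: point this at your published dashboard, not a single sheet
url = 'https://TableauServer/views/workbook/dashboard1?:format=pdf&parameter=value'

resp = requests.get(url)  # add cookies=... or auth=... as your server requires
resp.raise_for_status()

with open('dashboard.pdf', 'wb') as f:
    f.write(resp.content)  # the response body is the rendered PDF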

We know it's possible, because within the Tableau pages themselves, when you download a PDF it gives you several formatting options, including the option to put all the worksheets in a workbook into a single PDF.
I couldn't find any documentation on it, though. What I ended up doing was watching the network console in the browser (usually F12) while downloading the PDF by clicking the Download button. That showed me the URL endpoint and the JSON body the server expected in the request payload.
The endpoint URL wasn't too cryptic and ended with "commands/tabsrv/pdf-export-server". The challenge was taking the JSON in the request payload and finding the right settings to get everything into a single PDF.
This is a more technical approach, but it requires only a little coding skill; any language that has functions for HTTP calls will work (I use Python for it).
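As a hedged sketch of that approach in Python: everything below except the endpoint suffix has to come from what you capture in your own browser's network console. The path prefix, payload, and cookies are placeholders, not a documented API:

import requests

# Placeholder endpoint: use the full URL you captured, which ends with
# 'commands/tabsrv/pdf-export-server'
endpoint = 'https://TableauServer/<path-you-captured>/commands/tabsrv/pdf-export-server'

payload = {}  # paste the JSON request body you captured, with the settings
              # adjusted to export the entire workbook into a single PDF
cookies = {}  # copy the session cookies from the same logged-in session

resp = requests.post(endpoint, json=payload, cookies=cookies)
resp.raise_for_status()
# Depending on the server version, the PDF may come back directly or via a
# follow-up download URL that the network console will also show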
If you don't mind doing it outside a browser, tabcmd has lots of functionality to control PDF generation at the command line.
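For instance, a sketch that drives tabcmd from Python via subprocess; it assumes tabcmd is installed and on your PATH, that the workbook was published with "Show sheets as tabs", and that the server, user, and workbook names below are placeholders:

import subprocess

# Sign in first, then export the whole workbook into one PDF with --fullpdf
subprocess.run(['tabcmd', 'login', '-s', 'https://TableauServer',
                '-u', 'user', '-p', 'password'], check=True)
subprocess.run(['tabcmd', 'export', 'workbook/sheet1',
                '--fullpdf', '-f', 'workbook.pdf'], check=True)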

Related

Searching inside JSONs in Chrome devtools

Is there a way to search inside all JSON objects across all available responses in the Network tab? Currently it works only sporadically and isn't very reliable. Sometimes, especially with smaller responses, it's OK, but with more assets, looking for e.g. a specific param's value almost always ends unsuccessfully. Do you know any smart solution to this issue? I've checked, and the first question associated with it is already a few years old and the Google devs still haven't responded.
Example: I have an object ID in a response body, but cannot find it by searching with Ctrl+F.
I think one way is to save all the responses to a file (manually, or automatically if possible by using a browser extension).
After you have stored all the responses in a file, you can parse it and find things inside it using a script or just a regex.
You can save the responses (as a HAR file) manually by right-clicking a network response inside the developer console panel (I use Firefox).
I found that it's the same for Chrome.
Look here:
https://developers.google.com/web/tools/chrome-devtools/network/reference
I didn't check whether there is a way to automatically store all the responses received by a browser. I'm not sure, but I think it isn't possible.
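A minimal sketch of the parse-the-HAR idea in Python, assuming you exported the Network tab to a file named har_file.har as described above; the needle is whatever you are hunting for, e.g. an object ID:

import json

with open('har_file.har', encoding='utf-8') as f:
    har = json.load(f)

needle = 'OBJECT_ID'  # the value you could not find via Ctrl+F
for entry in har['log']['entries']:
    # bodies can be base64-encoded; check entry['response']['content'].get('encoding')
    body = entry['response']['content'].get('text', '')
    if needle in body:
        print(entry['request']['url'])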

How to get serverside file uploading progress in Perfect

I'm trying to create a web page using Perfect (perfect.org) where users will browse and upload files. Can anyone tell me how I can get the progress of a file upload?
perfect.org-fileUploads
Refer to the link above and follow the usual approach used in HTML/JS with PHP, JSP, or another server-side language.
In other words, you can receive the upload status as a percentage from the server side and display it to the client, or show a loader while the file uploads.
Before an official solution for this feature request is released by PerfectlySoft Inc., you could try splitting the file into small pieces, uploading them one by one, and then merging them back together on the server. Since there is no industry standard to apply here, other web servers either provide their own solutions or simply stay away from it.
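A rough client-side sketch of that split-and-upload idea in Python; the endpoint, field names, and chunk protocol are all hypothetical, and the Perfect server would need matching routes that store and later merge the pieces:

import os
import requests

CHUNK_SIZE = 1024 * 1024  # upload in 1 MB pieces
path = 'bigfile.bin'
total = os.path.getsize(path)

with open(path, 'rb') as f:
    sent = 0
    index = 0
    while True:
        chunk = f.read(CHUNK_SIZE)
        if not chunk:
            break
        # Hypothetical endpoint; each piece carries its name and position
        requests.post('https://example.com/upload-chunk',
                      data={'name': path, 'index': index},
                      files={'part': chunk}).raise_for_status()
        sent += len(chunk)
        index += 1
        print('progress: %d%%' % (sent * 100 // total))  # per-piece progress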

Applying transformation on html string inside a json response of ajax in moovweb

Hi, I am new to Moovweb and I am stuck on a requirement: I have to apply transformation logic to an AJAX response that comes in JSON format containing HTML, which I then have to add to my page.
A sample response:
{
  "success": true,
  "html": "<div>This can be a big html data</div>"
}
So basically, I need to apply a transformation to that HTML string. I went through the docs but didn't find anything to handle this kind of scenario.
Is there any way to do it?
If I understand your question correctly, there are several steps you need to take in order to solve this issue.
Open your Moovweb project using Google Chrome, right-click, and choose Inspect Element, which will open the Chrome DevTools.
Choose the Network tab at the top of the Chrome DevTools; it is the second tab from the left.
Trigger the AJAX call on your site, and under the Network tab you will see the response URL of the AJAX call. Most of the time it will be in a format like www.yoursite.com/ajax/rest_of_url.
Once you have found the AJAX response URL, open the main.ts file in your Tritium script and insert the following code (assuming your response URL is similar to the format above):
match($path) {
  with(/ajax/) { # or wherever the site files for the ajax response are contained
    log("--> importing ajax.ts")
    @import ajax.ts
  }
}
Now create a file for your Tritium code called ajax.ts. It lives in the same location as your html.ts and main.ts files. With the code above in your main.ts, the ajax.ts file will be applied every time a URL containing /ajax/ is called.
Now you can open the new ajax.ts file and start applying your Tritium functions to transform the JSON-wrapped HTML the way you need it.

Getting Data from website

The website constantly changes the data it displays, and I want to fetch that data every few seconds and log it in a spreadsheet. The problem is that in order to get to the page, I need a cookie, which I get when I log in. Unfortunately, I only know how to program in MATLAB. MATLAB has a function for this, urlread, but it doesn't deal with cookies. What can I do to get to that page? Can anyone point me in a direction where a programming noob like me can succeed?
You could use wget to download content while using HTTP cookies. I will be using stackoverflow.com as the example target. Here are the steps to follow:
1) Obtain the wget command-line tool. On Mac or Linux it is usually already available. On Windows, you can get it from the GnuWin32 project or from one of the many other ports (Cygwin, MinGW/MSYS, etc.).
2) Next, we need to obtain an authenticated cookie by logging into the website in question. You can use your preferred browser for this.
In Internet Explorer, you can produce it using "File menu > Import and Export > Export Cookies". In Firefox, I used the Cookie Exporter extension to export cookies to a text file. For Chrome, there should be similar extensions.
Obviously, you only need to do this step once, as long as the cookies have not yet expired!
3) Once you have located the exported cookie file, use wget to fetch the web page while providing it with this cookie. This can of course be run from inside MATLAB using the SYSTEM function:
%# fetch page and save it to disk (wget writes it as index.html by default)
url = 'http://stackoverflow.com/';
cmd = ['wget --load-cookies=./cookies.txt ' url];
system(cmd, '-echo');

%# process page: here I simply view it using the embedded browser
web( ['file:///' strrep(fullfile(pwd,'index.html'),'\','/')] )
Parsing the web page is a whole other topic that I will not go into. Once you get the data you seek, you can interact with Excel spreadsheets using the XLSREAD and XLSWRITE functions.
4) Finally, you can wrap all of this in a function and make it execute at regular intervals using the TIMER function.
Try using the java.net.* classes.
You should be able to use them directly in the MATLAB workspace, as described here: http://www.mathworks.co.uk/help/techdoc/matlab_external/f4863.html
MATLAB also has built-in functions for web downloading: for HTTP sites there are webread.m and websave.m, and for FTP there is mget.m.

viewing autocomplete.do files

I was trying to reverse engineer a website (www.asklaila.com) to find out how its Yahoo UI AutoComplete widget works. Looking at the page source, I saw it refers to a file called "/autocomplete.do". I wanted to know what this autocomplete.do file is, and whether I can download and open it locally on my machine.
I hope my request is legitimate and ethical.
As explained by FileInfo.com, the .do extension represents a server-side Java code file that runs on the server and outputs HTML to the response.
Therefore, you cannot download it and view its contents. Any request to the file will either return the same HTML, or an HTTP error if it requires parameters/form fields.
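Since the code only exists server-side, all you can inspect is its output. A hedged probe in Python; the query parameter name ('q') is a guess based on typical autocomplete endpoints, not taken from the site itself:

import requests

# Hypothetical parameter name; watch the real requests in the browser's
# network console to learn the actual one
resp = requests.get('http://www.asklaila.com/autocomplete.do', params={'q': 'coffee'})
print(resp.status_code)
print(resp.text[:500])  # whatever HTML/JSON the server chooses to return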