How can I return a text file and an error log from a webpage separately - forms

I have a Perl script which, when run from the command line, generates a text file of data in a specific format for use by another application. The script also prints informational warning messages on stderr. I'm writing a web front end for this. In an ideal world, when the user clicks 'submit' on the associated form, a page would be displayed in the browser containing the informational messages, and simultaneously a pop-up would appear allowing the user to save the text file of data to disk. I would like this to work on browsers without JavaScript enabled, so I think exactly what I want is probably not possible.
Some sites I have seen deal with this kind of thing by displaying the page with the informational messages, along with a link to the file to be downloaded. This would seem to mean storing the files and sorting out some sort of security so that another user cannot download your file (not that this is a big deal for the application in question).
I'm wondering if there is a more elegant way of dealing with this? E.g., is it possible to use multipart messages to somehow return both pieces of information in one go? Is it possible to pop up a second window with the informational messages without using JavaScript? Apologies if these seem like basic questions; my programming knowledge is in the domain of DNA sequence manipulation algorithms rather than web page generation.

If (and only if) the data is quick and easy to generate, generate it once for the error messages and a second time for the download. The link or button on the error-message page would regenerate the results and prompt for download.
This is a bit of a hack, since you need to consider what to do if the underlying data changes before the user hits the download link. Be careful to set the header correctly for a file download vs. a normal webpage, e.g.,
use CGI qw(header param);
if (param('submit')) {
    # Mark the response as a download, not a page to render.
    print header(-type                => 'application/octet-stream',
                 -Content_disposition => 'attachment; filename=foobar.dat');
    Gen_Results();
}
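Putting the two requests together, here is a minimal sketch of the whole flow. run_job() and generate_results() are hypothetical stand-ins for your existing script's logic, and the filename is made up:
#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(header param escapeHTML);
if (param('download')) {
    # Second request: regenerate the data and send it as an attachment.
    print header(-type                => 'application/octet-stream',
                 -Content_disposition => 'attachment; filename=results.dat');
    print generate_results();
}
else {
    # First request: run the job, show the warnings, and link back to
    # this same script with the download parameter set.
    my @warnings = run_job();
    print header(-type => 'text/html');
    print "<html><body><h1>Messages</h1><ul>\n";
    print '<li>', escapeHTML($_), "</li>\n" for @warnings;
    print qq{</ul><p><a href="?download=1">Download data file</a></p></body></html>\n};
}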
To be honest, I'd just use a little JavaScript anyway, since it's a pretty safe assumption nowadays. Otherwise, use a <noscript> tag for some alternative.

Searching inside JSONs in Chrome devtools

Is it possible to search inside all JSON objects from all the available responses in the Network tab? Currently it works only sporadically and isn't very reliable. With smaller responses it's usually fine, but when more assets have loaded, looking for, e.g., a specific param's value almost always ends unsuccessfully. Does anyone know a smart solution to this issue? I've checked, and the first question associated with it is already a few years old and the Google devs still haven't responded.
Example: I have an object ID in a response body, but cannot find it with a Ctrl+F search.
I think one way is to save all the responses to a file (manually or, if possible, automatically with a browser extension).
After you have stored all the responses in a file, you can parse it and find things inside it using a script (like the sketch below) or just a regex.
You can save the responses (as a HAR file) manually by right-clicking on a network response inside the developer console panel (I use Firefox).
I found that it's the same for Chrome.
Look here:
https://developers.google.com/web/tools/chrome-devtools/network/reference
I didn't look into whether there is a way to automatically store all the responses received by a browser. I'm not sure, but I think it isn't possible :/
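If you go the script route, here is a minimal sketch in Perl (the file name and search value are hypothetical). A HAR file is just JSON, so you can walk it and print the URL of every response whose body contains the value you're hunting for:
#!/usr/bin/perl
use strict;
use warnings;
use JSON::PP;    # core module, no installation needed
# Usage: perl har_grep.pl saved.har 12345
my ($har_file, $needle) = @ARGV;
open my $fh, '<', $har_file or die "Cannot open $har_file: $!";
my $har = decode_json(do { local $/; <$fh> });
# In a HAR file the requests live under log->entries, and the response
# body (when the browser captured it) under response->content->text.
for my $entry (@{ $har->{log}{entries} }) {
    my $body = $entry->{response}{content}{text};
    next unless defined $body;
    print $entry->{request}{url}, "\n" if index($body, $needle) >= 0;
}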

Is it possible to hide certain files from "show package contents"?

I have a macOS app coded in Swift, and when someone right-clicks > Show Package Contents there is a file that reveals some information I do not want the user to see. Is it at all possible to hide that file?
There's no way to secure data on the client (Mac) side. If your program can read something, so can a hacker. You can do three things about it:
Make it obfuscated enough to be annoying to deal with, hoping that bad actors get discouraged.
Make the reward of reading the sensitive data lower, so there's less incentive to do so.
Make the sensitive data black-boxed by a server you control and have secured, and have all the sensitive operations outsourced to computation on that secure server.
No, you can't hide files in a meaningful way.
If you name the file starting with a dot ("."), it is not shown in the Finder by default, but that's very easy to get around.
Better to encrypt the file and decrypt it in your app. That way nosy users can see the file but can't make any sense out of the contents.

How to extract data from a web site and format to raw text - iPhone Dev

I have been looking around for a while and not found anything useful; I'm also not sure if I have worded the question in the clearest fashion, so apologies.
I have a section of an app I am building called 'Company News'. The company in question has a news page on their website which displays a title, an excerpt of text and a read more option.
At the minute the iPhone application just has a UIWebView which links to that URL and displays an error if no connection is available. However, if my user clicks a story to read the news, it obviously opens up a new page. I want to avoid having to build in 'back' and 'forward' buttons, and to stay away from it looking like a browser within the app.
With that said, I am looking for a way to just extract that data from the website and just display it in my app as raw text. I am not particularly bothered about rich text formatting or anything fancy. I would just like the title and body of text.
Is this possible?
In essence, then, you are looking for an HTML parser.
Assuming the HTML you wish to parse has a predictable format, the approach I would take is to load the HTML via whatever URL loading system you want - e.g. NSURLConnection, ASIHTTPRequest, etc.
Then you will need to parse the raw HTML. I use XPath. It requires that you learn the syntax, but it should work.
For more details about how you might use XPath for parsing HTML, see the second response to this question. You will need to link to libxml2 in your project then use XPath to extract the nodes of interest.
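To make that concrete, here is a small sketch of the XPath approach using Perl's XML::LibXML, which wraps the same libxml2. The markup and XPath expressions are made-up placeholders, but the node-selection logic is what you would reproduce with libxml2 calls on the iPhone:
#!/usr/bin/perl
use strict;
use warnings;
use XML::LibXML;
# Hypothetical markup standing in for the company's news page.
my $html = <<'HTML';
<div class="story"><h2>First title</h2><p>First excerpt...</p></div>
<div class="story"><h2>Second title</h2><p>Second excerpt...</p></div>
HTML
# libxml2's HTML parser recovers from the tag soup found on real pages.
my $dom = XML::LibXML->load_html(string => $html, recover => 2);
# Pull out just the title and body text, ignoring all markup.
for my $story ($dom->findnodes('//div[@class="story"]')) {
    print $story->findvalue('.//h2'), "\n";
    print $story->findvalue('.//p'),  "\n\n";
}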
Scraping web pages in this way is fragile, though, because it depends on the structure of a page you don't control and which could be changed unpredictably.

Email sent by SAP Workflow has partial Web hyperlink

I think this might be a simple question but I cannot seem to figure it out.
I have a workflow which simply sends a mail. In the content of the mail I have a hyperlink going back to our SAP CRM system, and I pass some parameters to this hyperlink.
The workflow works fine and the email is sent; however, the hyperlink wraps onto the second line of the mail and becomes inactive. If I copy the entire hyperlink and paste it into a browser, it works.
The issue is I don't want users to copy and paste, I simply want them to click on the hyperlink.
Here is a screen print of what I am talking about
http://img402.imageshack.us/img402/9471/38348167.png
And here is a screen print of the actual email that is sent:
http://img210.imageshack.us/img210/6424/14370746.png
I tried going into transaction PFTC (Task Maintain), entered my task, and opened it up. I went to the Description tab, hit the Edit button, and changed the tag column to 'continuous text', but that didn't work; then I tried 'extended line', and that too didn't make a difference.
Here is a screen shot of that:
http://img341.imageshack.us/img341/6254/37776438.png
My question is: is there any way to get the hyperlink on one line, or even to have it be clickable across two lines?
Thanks so much.
From what I can see, that's a limitation of SAPconnect when sending plain-text emails. You could reconfigure the system to send HTML mails, but this would affect EVERY outgoing mail and should be handled extremely cautiously. I'd suggest you write a small class to assemble and send the HTML mail and call it from the workflow step. I'd use the BCS for sending the mail - it has excellent online documentation and comes with several demo programs (BCS_EXAMPLE_*). You could assemble the HTML body using ABAP, although this usually yields rather messy code. Cleaner ways of doing this would be to either put all of the input data into a structure and use a simple transformation, or to use dynamic documents (see for example report DD_ADD_LINK).

How safe is the data being parsed by RTF editors like TinyMCE?

I have a great concern in deploying the TinyMCE editor on a website. Looking at the code parsed by the editor, it does a great job, and I leave the HTML button off the toolbar configuration so users cannot inject their own source.
However, from what I read in the TinyMCE docs, it claims to degrade nicely to a regular textarea should JavaScript be disabled in a user's browser... and therein lies my concern. If it does revert to a normal textarea, then the user is able to easily inject their own HTML, and this leaves me with a security concern.
I just pass through data created with TinyMCE, and it is used within another page created by my script, so it poses no security risk to my server. The security concern arises over what malicious data may be passed to another user viewing the generated page.
I know many of you will tell me to just use regexes or parse this data, but that itself could be a nightmare, as I would be trying to either:
a.) use regexes to try to clean up the HTML without breaking the generated page (and it is better to parse the data for that anyway), or
b.) reparse data that has already been parsed by the RTF editor, which would also probably end up breaking the generated page.
If anyone has previous experience with this type of scenario, I would really appreciate a 'heads-up' as to any other risks that using an RTF editor for user data could entail.
I would really like to provide this as a user option, but not if the risks outweigh giving the user of the RTF editor a chance to take a whack at another user viewing the page generated by the script.
My gut feeling is to steer a wide berth around use of the RTF at this point.
Thanks for any direction you can give me with your own experiences.
You cannot have client-side security on the web. You simply can't trust the browser, because it's easy for a malicious user to substitute a replacement browser that does whatever he wants.
If you accept HTML from users (using TinyMCE or through any other method) and display it to other users, you must sanitize or validate the HTML in some way on the server. If you're using Perl, the leading package seems to be HTML::Scrubber (along with various other modules that help you plug it in to various frameworks). I haven't had occasion to try it myself.
The TinyMCE Security page mentions some ways to make it harder for people to submit arbitrary HTML, but you still need server-side checks.
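For illustration, here is a minimal server-side sketch using HTML::Scrubber, based on its documented interface; treat the whitelist as a starting point rather than a vetted policy:
#!/usr/bin/perl
use strict;
use warnings;
use HTML::Scrubber;
# Whitelist roughly what a basic TinyMCE toolbar produces, permit links
# only with http(s) hrefs, and strip all other tags and attributes.
my $scrubber = HTML::Scrubber->new(
    allow   => [qw(p br b i u em strong ul ol li blockquote)],
    rules   => [ a => { href => qr{^https?://}i, '*' => 0 } ],
    default => [ 0, { '*' => 0 } ],
);
my $untrusted = q{<p onclick="evil()">hi</p><script>steal()</script>};
print $scrubber->scrub($untrusted), "\n";    # expected output: <p>hi</p>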
Regex is generally not considered good for parsing HTML; see "RegEx match open tags except XHTML self-contained tags" - but I have noted the "perl" tag :)
My advice when taking markup from users is to always parse it through something that can accept malformed HTML and return well-formed HTML. These parsers generally produce something that can be queried and updated with some form of XPath.
In Python there is a module called BeautifulSoup, Ruby has Nokogiri, and in .NET there is a project called HtmlAgilityPack; they all do this sort of thing. I'm not sure what library Perl has, but I'm sure there would be something.
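For what it's worth, Perl does have such a library: XML::LibXML's HTML mode (backed by libxml2) is one forgiving parser that accepts malformed markup and hands back a well-formed, XPath-queryable document. A tiny sketch:
use strict;
use warnings;
use XML::LibXML;
# Tag soup in, well-formed HTML out.
my $soup = '<p>unclosed paragraph <b>unclosed bold';
my $dom  = XML::LibXML->load_html(string => $soup, recover => 2);
print $dom->toStringHTML;               # tags closed and properly nested
print $dom->findvalue('//b'), "\n";     # and the tree is queryable via XPath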