Is the "Actions on Google" Support option 'Contact Us' an effective way of asking for documentation clarification? - actions-on-google

This question concerns the 'Contact Us' option on the support page for Actions on Google. We used it recently (last Thursday, if memory serves) to ask for clarification regarding a certain part of the Smart Home documentation, but have not heard back yet. We also have not received a confirmation e-mail to indicate receipt of our request. The 'quota' of remaining questions did go down (from 15 to 14), which makes us think the request was processed.
However, we are uncertain whether we will receive any response, or how soon we may expect one. Our request was sent with 'Medium' urgency.
Does anyone have experience with this support option who can vouch for its efficacy? Additionally, is there a way to view currently 'open' support requests, to see whether ours is being looked into or has perhaps been closed?

The Contact Us page is generally for specific help related to your project. If your question is more general, such as a question about the documentation, it may be preferable to ask in a broader forum such as Google+ or Stack Overflow, where more individuals with hands-on experience of the platform will be able to help.
(Such as myself)

Related

How do I disable Microsoft's link rewriting on my MSN account?

Microsoft has recently deployed a mail-corrupting feature they call (one has to laugh) "Advanced Threat Detection", which is an unmitigated disaster in almost every imaginable way. Its chief "feature" (at least as I experience it as an end user) is the rewriting of nearly all links in received e-mails, changing, for example,
http://www.tandfonline.com/toc/tmam20/10/1
to
https://na01.safelinks.protection.outlook.com/?url=http%3a%2f%2fwww.tandfonline.com%2ftoc%2ftmam20%2f10%2f1&data=01%7c01%7cbickford%40PITT.EDU%7ca9f7b386fae94ca994bb08d38e3a59bb%7c9ef9f489e0a04eeb87cc3a526112fd0d%7c1&sdata=st79jNKGyGbI%2fcDprP%2fgra%2fTQz7lni5uZCS7a1W83OI%3d
(Really!)
How do I disable this on my msn.com e-mail account?
It takes some doing, but eventually the Outlook Research Team was able to disable this feature.
If you want this disabled, follow the usual links to help (the "?" in the outlook.com UI) and ask for a member of the Outlook Research Team to contact you directly by mail. They will be able to disable the feature within 24 hours.
It turns out that what many users experienced with this feature was indeed a bug:
Only those links which were deemed to be suspicious were supposed to redirect to a warning page [...] however the change affected all the links and our engineering team had to get it corrected.
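Until the feature is disabled, note that the original target survives inside the rewritten link as its url query parameter, so it can be recovered mechanically. A minimal sketch in Python (assuming the safelinks format shown above; the data and sdata values are truncated here):

    from urllib.parse import urlparse, parse_qs

    # Recover the original target from a Safe Links-rewritten URL by
    # extracting and decoding its "url" query parameter.
    rewritten = ("https://na01.safelinks.protection.outlook.com/"
                 "?url=http%3a%2f%2fwww.tandfonline.com%2ftoc%2ftmam20%2f10%2f1"
                 "&data=...&sdata=...")  # data/sdata truncated for the example

    params = parse_qs(urlparse(rewritten).query)
    print(params["url"][0])  # parse_qs percent-decodes the value:
                             # http://www.tandfonline.com/toc/tmam20/10/1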

Strange security issue - why would this happen? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 7 years ago.
I work for a company which handles some websites that have educational forms prospective students can fill out if they wish to be contacted by a college.
We have submissions coming in from two overseas countries: someone is continually filling out and attempting to submit forms with ridiculously bogus information. The only possible outcome if these were to go through would be that the school would try to call them.
I cannot figure out how this could benefit them in any way, shape, or form. It seems like it's probably a bot, because they are inserting integers for first name, last name, and email address. I've even considered that it's like those companies I've heard of that unethically boost their site traffic by having people (or bots) generate fake hits on their pages. I don't think that's the case here, but I'm not sure.
This isn't my project, but someone mentioned it to me and I found it intriguing. What possible benefit would a bot or hacker have from doing this? Each attempt has been unsuccessful but even if it got through, what's the point? Did someone actually send a bot to try and spam educational websites where all you can do is submit an inquiry to a school? What's going on here, ideas?
My best guess is that it's a bot someone put out there and it's hitting our site by mistake. I don't get it, but I'm not a security ninja. I would love possible scenarios, preferably evidence/fact-based, not opinions if you can't back it up - nothing personal, it's just that I know these are the rules of Stack Overflow.
So if you have a fact-based hypothesis why this may be happening, I would love to understand the how/why...
I don't think you will ever find a definitive answer to your question, because there are many reasons someone might do this. It may be "for fun", to increase Google ranking, or there may be a personal "rivalry" between someone and the company.
Well, you can at least check whether the spam comes from an automated bot (if you can change the HTML/backend code) by using the honeypot method, with the trap field nested somewhere in the form; see the sketch below. If the spam stops, it was an automated spam bot, and you should most likely treat it as random spam; otherwise, someone may have created a spam script specifically for your site, whether for fun or for other purposes.
P.S.: Do not use reCAPTCHA, as some bots can break it.
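A minimal sketch of the honeypot idea, using Flask purely for illustration (the framework, route, and field names are my own assumptions, not anything from the original site):

    # Honeypot sketch: add a field that humans never see (hidden via CSS),
    # so any submission that fills it in almost certainly came from a bot.
    from flask import Flask, request

    app = Flask(__name__)

    FORM_HTML = """
    <form method="post" action="/inquiry">
      <input name="first_name" placeholder="First name">
      <input name="last_name" placeholder="Last name">
      <input name="email" placeholder="Email">
      <!-- Honeypot: invisible to humans, filled in by naive bots -->
      <input name="website" style="display:none" tabindex="-1" autocomplete="off">
      <button type="submit">Send</button>
    </form>
    """

    @app.route("/inquiry", methods=["GET", "POST"])
    def inquiry():
        if request.method == "GET":
            return FORM_HTML
        if request.form.get("website"):  # trap field was filled in: a bot
            app.logger.info("honeypot tripped from %s", request.remote_addr)
            return "Thanks!", 200        # fail silently; don't tip off the bot
        # ... normal processing of a legitimate submission goes here ...
        return "Thanks!", 200

If the junk submissions all trip the trap field, you're dealing with a generic automated bot; if they keep arriving with the field empty, someone has scripted your form specifically.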
It's most likely a bot attempting SQL injection.
How does the SQL injection from the "Bobby Tables" XKCD comic work?
The bot isn't trying to insert data into your database. It is trying to craft malicious requests so that it can retrieve data from your database, or perhaps just delete all of it.
You need to make sure that all your SQL queries are parameterized (or at least properly escaped) so that request data from the bot cannot modify your queries to work in unintended ways; a sketch follows.
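A minimal sketch of the parameterized-query defense in Python, using the standard sqlite3 module (the table and column names are invented for illustration):

    import sqlite3

    conn = sqlite3.connect("leads.db")
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS leads (email TEXT)")

    # Vulnerable: input like "'); DROP TABLE leads; --" would become part
    # of the SQL text itself:
    #   cur.execute("INSERT INTO leads (email) VALUES ('%s')" % user_email)

    # Safe: the value travels separately from the SQL, so it can only ever
    # be treated as data, never as SQL syntax.
    user_email = "12345"  # the bogus integer-like input the bots submit
    cur.execute("INSERT INTO leads (email) VALUES (?)", (user_email,))
    conn.commit()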
If you provide some examples of the requests, Stack Overflow will be able to tell you exactly what's going on.

SNDSMTPEMM NOTE Limited to 400 Characters

I am trying to get our iSeries 6.1 machine to send email through our Exchange server. I can do it with SNDDST and with SNDSMTPEMM, but both are very limiting. I need support for basic HTML, and for PDF attachments. I thought I could get them both from SNDSMTPEMM, but now I see that the body parameter for SNDSMTPEMM (NOTE) is limited to 400 characters. Is it possible that this command allows 10 attachments but less than a paragraph of text?
I would like to know if anyone is using this command, and if I am missing something about it that would allow me to create an actual email message.
If indeed I can't put more than 400 characters into the body of an email with this command: I have read about MMAIL and MAILTOOL, and I am curious whether anyone knows if this message-length restriction exists for those as well.
It will be a very hard sell for our main programmer to install any third-party anything to get this working, so I would love to be able to do it with SNDDST or SNDSMTPEMM (or some other built-in I haven't found yet).
I don't currently need to be able to send to multiple recipients, but I do need to be able to attach a couple of attachments (where SNDDST fails for me). I also can't use attachments with an *LMSG.
I'm sorry if this is the wrong place for this kind of post - I find it very difficult to find the right place.
The SNDSMTPEMM command is indeed limited to 400 characters in the message body, according to the documentation.
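For reference, what you are ultimately trying to produce is an ordinary MIME message with an HTML body and a PDF attachment. A minimal sketch using Python's standard email and smtplib modules, purely to illustrate that structure (the host name, addresses, and file path are placeholders):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "reports@example.com"
    msg["To"] = "user@example.com"
    msg["Subject"] = "Monthly report"

    # Plain-text fallback plus an HTML alternative (no 400-character limit),
    # then a PDF attachment; the library builds the multipart MIME structure.
    msg.set_content("Plain-text fallback for non-HTML mail clients.")
    msg.add_alternative("<h1>Monthly report</h1><p>A body well over 400 "
                        "characters can go here.</p>", subtype="html")
    with open("/tmp/report.pdf", "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="pdf", filename="report.pdf")

    with smtplib.SMTP("exchange.example.com") as smtp:
        smtp.send_message(msg)

MMAIL and MAILTOOL are, in essence, building this same multipart structure for you on the IBM i side.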
Where I work, we still mainly use MMAIL, which used to be free but now requires a $50 "donation" (and lots of hoops to jump through just to register). It doesn't have that message length limitation. It comes with several commands for ease of use, and a service program for more fine-grained control over how the message is built. Once you download it, you have access to the source, so you can really muck around with it if you have to. (The donation also allows you to download a multitude of other utilities from Easy400.net.)
A better but more expensive option is Bradley Stone's MAILTOOL. It's still competitively priced, as far as commercial IBM midrange software goes. If you go that route, it's probably worth getting the Plus! add-on, which side-steps IBM's native SMTP, a recurring source of headaches. (MMAIL and the basic MAILTOOL rely on native SMTP.)
The best place for this kind of post, at least for now, is the Midrange-L mailing list at midrange.com. When it comes to AS/400, iSeries, and IBM i stuff, that community is currently much more active than Stack Overflow, and they welcome open-ended discussion and "what do you recommend?" posts, which are discouraged here. You can find some discussion on the command you mentioned, and some alternatives, in this thread.

iPhone/iPad Application Development Limitations

It's quite annoying when you have no authoritative sources to confirm whether a particular task can be done using the iPhone's available (public) APIs. What's the preferred way of finding out?
Shall we go through the documented iPhone APIs?
Ask senior developers? (Which I don't prefer; you should not depend on others too much, and there's no certainty about their opinions.)
Mail Apple? (By the way, they offer only 2 technical calls/yr. :)
Any other ideas?
What do you suggest?
Thanks, guys!
The public APIs are documented on developer.apple.com in the iOS Reference Library.
However, the only truly authoritative test of whether their use is acceptable is to submit an app and have it reviewed. Apple just added a review board if you wish to appeal a review ruling, so that may be the new last word (unless you get the executive staff's attention (e.g. SJ)).
If you want more facts before submitting an app, there are a few sites which show which types of apps are being accepted and rejected, and, if rejected, for what reason. However, past acceptance of a type of app is not a precedent or guarantee of any future policy.
If you wish to try interpreting their rules and guidelines yourself, they are available as part of the Developer iOS Standard Agreement.
The Developer support people who answer technical question usually cannot answer review or approval questions, except to point you at the proper API documentation. (The reason may be that these are often legal, corporate policy or marketing questions, not technical questions.)
You can look at official review process from Apple here:
https://developer.apple.com/appstore/resources/approval/guidelines.html
Step 1: Check the API.
Step 2: If you can't find a way in the API (maybe you are looking at the wrong API), use Google to find out whether it can be done or not.
Step 3: If you can't be sure using Google, then ask on SO.
IMO, asking Apple is never an option.

Ethics of robots.txt [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I have a serious question. Is it ever ethical to ignore the presence of a robots.txt file on a website? These are some of the considerations I've got in mind:
If someone puts a web site up, they're expecting some visits. Granted, web crawlers use bandwidth without clicking on the ads that may support the site, but the site owner put the site on the web, so how reasonable is it for them to expect that they'll never get visited by a bot?
Some sites apparently use a robots.txt exactly in order to keep their site from being crawled by Google or some other utility that might grab prices and therefore allow people to do price comparisons easily. They have private search engines on the site so they obviously want people to be able to search the site; apparently they just don't want people to be able to easily compare their information with other vendors.
As I said, I'm not trying to be argumentative; I would just like to know whether anyone has ever come up with a case where it's ethically permissible to ignore the presence of a robots.txt file. I cannot think of one, mainly because people (or businesses) are paying money to put up their web sites, so they should be able to tell the Googles/Yahoos/other SEs of the world that they don't want to be on their indices.
To put this discussion in context, I'd like to create a price comparison website and one of the major vendors has a robots.txt that basically prevents anyone from grabbing their prices. I'd like to be able to get their information but, as I said, I can't justify simply ignoring the wishes of the site owner.
I have seen some very sharp discussion here and that's why I would like to hear the opinions of developers that follow Stack Overflow.
By the way, there is some discussion of this topic on a Hacker News question but they seem to mainly focus on the legal aspects of this.
Arguments:
A robots.txt file is an implied license, especially since you are aware of it. Thus, continuing to scrape their site could be seen as unauthorized access (i.e., hacking). Sucks, but arguments like this have been made in other legal cases recently (not directly related to robots.txt, but in relation to other "passive controls").
Grabbing prices violates no copyright law, including DMCA, since copyright does not include factual information, only creative.
Ethically, you should not grab prices because the vendor should have the ability to change prices without worrying about being accused of a bait/switch by people coming from your site.
Have you taken the high road, explaining the site to them and saying you'd love to include them in your list of vendors? Maybe they will love the idea and actually expose the data in a way that is easy for you to consume and less resource-intensive for them to produce.
There are no laws written directly about robots.txt because netiquette is generally followed. Don't be one of the "bad guys."
Some people filter robots because they use URL links to perform "actions" like adding things to carts, and robots leave them with massive numbers of abandoned shopping carts in their database.
Some people filter robots because they have exclusive prices that they can't advertise openly based on agreements with their vendors. You could be putting them in a bad position by exposing those prices on your site.
In this economy, if a company doesn't want to do everything possible to advertise themselves, it's their own fault that you don't include them.
The other use of robots.txt is to help protect web spiders from themselves. It's relatively easy for a web spider to get mired in an infinitely deep forest of links, and a properly constructed robots.txt file will tell the spider that "you don't need to go here".
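For reference, robots.txt is just a plain-text file of per-user-agent rules, and honoring it programmatically is trivial. A minimal sketch using Python's standard urllib.robotparser (the site, user-agent, and paths are placeholders):

    from urllib.robotparser import RobotFileParser

    # A vendor's robots.txt might read:
    #   User-agent: *
    #   Disallow: /prices/
    #   Disallow: /cart/
    rp = RobotFileParser()
    rp.set_url("https://vendor.example.com/robots.txt")
    rp.read()  # fetch and parse the rules

    url = "https://vendor.example.com/prices/widget-123"
    if rp.can_fetch("MyPriceBot/1.0", url):
        print("allowed to crawl", url)
    else:
        print("robots.txt asks us not to crawl", url)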
Many people have tried to build businesses off building "price comparison" engines that scraped major sites.
Once you start getting any sort of traffic or revenue to speak of, you will receive a cease and desist. It's happened to dozens, if not hundreds, of projects. I even worked on a small project that received a C&D from Craigslist.
You know how they say "It's easier to ask forgiveness than it is to get permission"? It doesn't hold true with page scraping. Get permission, or you will be hearing from their lawyers.
If you're lucky, it'll be early on, when you've got nothing to lose. If it's late, you may lose your business and all your work overnight, with a single letter.
Getting permission shouldn't be hard. Unless you're doing something sneaky, you're likely going to drive them additional traffic. Hell, once your product takes off, sites may be begging you, or even paying you to add their data.
One reason we allow robots to dig through the web without complaint is that we have a way to stop them if we want to. Protects both sides.
Remember the uproar when Cuil's robots were accused of going over-the-top, apparently acting like a DoS attack in some cases and using up bandwidth allowances of some small sites?
If too many people violate robots.txt we might get something worse.
"No" means "no".
To answer the narrow question: for the price comparison website, you're probably best off grabbing the price in real time rather than scraping the database in advance. It's hard to imagine that being a problem.
An interesting IRL version of the story, involving The Harvard Coop:
Coop Calls Cops On ISBN Copiers.
Short answer: No.
On the narrow issue: If a seller says that their prices are secret, I think you have to respect that. I'd contact them and ask if they really don't want price comparison engines like yours to include them, or if the "no trespassing" sign is for technical reasons. If the latter, perhaps they'll provide you with an alternative. If the former, then I'd say too bad, they don't get included, they lose some business, and it's their problem.
Tangential rant: Personally, I get pretty annoyed with companies that make me jump through hoops to find out the price of their products, places that make me call and talk to a salesman so he can give me a hard-sell pitch, or worse, make me give them my phone number so their salesman can call and harass me. I figure that if they're afraid to tell me the price, it probably means that it's too high.
In general: A robots.txt file is like a "No Trespassing" sign. It's the owner's right to say who is allowed on their property. If you think their reasons are dumb, you can politely suggest they take the sign down. But you don't have the right to disregard their wishes. If someone puts a No Trespassing sign on his yard, and I say, "Hey, I just want to take a quick short cut, what's the big deal?" -- Maybe I'm stepping on his prized Bulgarian violet bulbs and destroying a valuable investment. Maybe I'm crossing his people's sacred burial ground and offending their religious sensibilities. Or maybe he's just an ornery jerk. But it's still his property and his right. Oh, and if I fall into the dangerous sinkhole after ignoring the No Trespassing sign, who's to blame? (In America, I could probably still sue him for all he's worth despite the fact that he warned me, but is that right?)
I'm showing some ignorance here, but I always thought a bot was something only sent out by a search engine. Like Google or Yahoo.
Thus, if you wrote an application that searched content on the internet, I wouldn't consider that a search engine bot, which to my knowledge is what robots.txt is trying to block.
But this may just be selective ignorance, because I might do it until the webmaster of that site contacted me and asked me to stop :)
If people make something available for public access, they shouldn't try to put limits on it. Adding a robots.txt file to your site is the equivalent of putting a sign on your lawn that says "Please don't look at me."