Does Edge interpret selectors differently depending on whether the developer tools are open?

What I want to know
Is there a difference in how selectors and the DOM are interpreted depending on whether the developer tools are open or closed?
If you know of good books or literature (web pages) about this or other browser mechanisms and behaviors, please share them.
Issue
In Edge, my page works as expected only while the developer tools are open;
once I close the developer tools, it no longer behaves as expected.
What I confirmed
I found and checked the two common causes below.
A console.log() call left in the code.
A stale cache.
No matter which suggested solution I tried, my problem was not solved.
Thinking it might be peculiar to Edge, I also checked Microsoft's developer tools documentation just in case,
but could not find any particularly useful information.
Final fix
Below is the relevant .js code.
There was a defect in the selector I passed to jQuery; after correcting it, I confirmed the expected behavior.
The direct cause was a missing closing bracket:
$("input[value=hoge").parent().show();
$("input[value=hoge]").parent().show();
How I debugged
Since the problem only occurred when the developer tools were off in the first place,
I basically traced it with alert() calls.
Thanks.
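One note on the leftover console.log() case above: in some older browsers (notably Internet Explorer 9 and earlier), window.console only exists while the developer tools are open, so a stray console.log() throws and halts the script – which produces exactly the "works only with the developer tools open" symptom. A minimal guard (a sketch only; safeLog is a hypothetical helper name):

```javascript
// In some older browsers, `console` is undefined until the developer
// tools are opened, so an unguarded console.log() throws and stops
// script execution. This wrapper logs only when console is available.
function safeLog(message) {
  if (typeof console !== "undefined" && typeof console.log === "function") {
    console.log(message);
  }
  // Otherwise do nothing instead of throwing.
}

safeLog("debug message"); // safe whether or not the dev tools are open
```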

Related

Android 11 replacement for RadioInfo?

Is there a replacement for the RadioInfo activity that was removed as of Android 11? (com.android.settings/.RadioInfo)
If not, where else can this information be found in UI? Or adb? (preferably without rooting the device)
Our team relies heavily on the use of the data in RadioInfo for QA testing, especially since it works on nearly all Android devices (rather than being OEM-dependent like engineering short codes).
It also offered ideal granularity in network selection, more so than the basic Settings UI.
Also, why was it removed? I looked back about 11 months in logs and didn't see a single mention of it (though maybe I'm looking in the wrong place - if there's a comment on it somewhere, please do share the link).
Looks like it just got moved.
Android 10 and earlier:
com.android.settings/.RadioInfo
As of Android 11:
com.android.phone/.settings.RadioInfo

Can someone help me out with changing my invocation phrase?

For a project that I'm working on that is in Alpha right now, I used the invocation "talk to XXX". Now that I want to deploy the Action to Beta, I want to change the invocation name/phrase as well. So I changed it to "talk to YYY", which is also what the suggested input chip in the simulator shows. But when I try to test this in the simulator, I get the following error message:
Invocation Error:
You cannot use standard Google Assistant features in the Simulator. If you want to try them, use Google Assistant on your phone or other compatible devices.
For some reason, if I ignore the suggested input chip (which says "talk to YYY") and type in "talk to XXX" (the old invocation phrase), everything still works. It seems I'm missing something, and Google support can't answer me. Does anyone know what I can do to successfully deploy to Beta?
When I get that message, it's usually a problem with the Console. A refresh of the page—and sometimes a simple retry—usually does the trick.
You could also try "Change Version" to make sure you're pointed to "Draft".

Making my agent with a difficult name, easier to invoke?

I'm creating an agent that interacts with an API I created, Auroras.live. However, I always have trouble invoking the test version of the agent from my Google Home.
I really have to stress the "S" in Auroras, and I also have to say "dot"; otherwise Google Home interprets it (I think) as Auroras Live, or Aurora.live, without the dot or the "S".
This is definitely going to be a problem for others too, as they might not know to pronounce the dot, or might forget to stress the "S", and as a result will get frustrated and not use my agent.
While filling out the app details, I tried using different invocations (such as "Talk to Auroras dot live" and "Speak to Aurora Live"), but it wouldn't let me, because I needed to use the exact title of my app.
What should I do? Should I (or can I) submit it under an easier-to-pronounce name (like "the aurora app")? Can I somehow tell Google to accept it with or without the "S" / dot? Any suggestions welcome.
This is definitely a case where you would want your invocation name to be (slightly) different from your display name. I would list "Auroras Live" as your display name and "Aurora live" as the invocation name.
As part of the testing instructions, explain the problems you're seeing to the tester and request that both invocations be allowed.
If you want to clearly associate it with the auroras.live website, you could also mention that in the testing instructions (to include the dot), but you should probably also consider including a link to the site from the description and possibly from the action itself.

Strange permissions in several apps

For quite a while I've been collecting app details. My site meanwhile covers about 13k apps and has a list of permissions with explanations. A few weeks ago I also started recording "unknown permissions" (i.e. permissions apps are requesting, but which are not covered by my list). Most of these are app-specific (<package_name>.*) or manufacturer-specific (e.g. com.sec.* for Samsung, com.htc.*, com.sonyericsson.*, etc.) permissions, but analyzing them now, I found a list of very strange permissions requested quite frequently – and documented nowhere.
Are there any insights here? I will list them ordered by most used, with a comment on what I've figured out so far – and hope you can give me some additional details on at least a few of them:
WRITE_INTERNAL_STORAGE: wrongly deduced from WRITE_EXTERNAL_STORAGE?
ACCESS_GPS: pre-Android-1.0 and long obsolete
ACCESS_LOCATION: pre-Android-1.0 and long obsolete
STORAGE: also a remnant of pre-1.0 – or was the permission group picked instead? (used e.g. by com.yuilop)
READ_INTERNAL_STORAGE: wrongly deduced from READ_EXTERNAL_STORAGE?
NETWORK: also a remnant of pre-1.0 – or was the permission group picked instead? (used e.g. by com.koushikdutta.backup)
PERMISSION_NAME: copy-pasta?
LOCATION: no such permission (or also pre-1.0 – or picked the permission group instead?)
SYSTEM_OVERLAY_WINDOW: that's what SYSTEM_ALERT_WINDOW permits: using overlays ;)
RECORD_VIDEO: wrongly deduced from RECORD_AUDIO, and should probably be CAMERA? Also see here
ACCESS_COURSE_LOCATION: definitely a typo. And ACCESS_COARSE_LOCATION (which was meant) was most likely not needed, if no one noticed :)
READ_APN_SETTINGS: wrongly deduced from WRITE_APN_SETTINGS ???
BROADCAST_PACKAGE_REPLACED: probably wrongly deduced from BROADCAST_PACKAGE_ADDED and BROADCAST_PACKAGE_REMOVED ???
GET_CLIPS / READ_CLIPS / WRITE_CLIPS: ??? Obviously these refer to clipboard actions, but I've never heard of those permissions. The developer manual on copy and paste does not mention any permission for this. Despite that, a screenshot from AppOps found in this blog article clearly shows a "Read clipboard" permission.
WRITE_LOGS: probably wrongly deduced from READ_LOGS
BROADCAST_PACKAGE_CHANGED: probably wrongly deduced from BROADCAST_PACKAGE_ADDED and BROADCAST_PACKAGE_REMOVED ???
CHANGE_WIFI_AP_STATE: ???
There are several more (over 100 altogether), but these are the ones used by multiple apps. Note that in the Manifests of affected apps, they are prefixed by android.permission. (e.g. android.permission.WRITE_INTERNAL_STORAGE). Any clues?
Where do people get those ideas from, when a search for the explicit name doesn't turn up anything, I wonder… Most confusing is that several of the above are even suggested here at SO to fix issues – despite being mentioned in other posts as definitely not existing.
EDIT: Being asked to name some example apps:
*_INTERNAL_STORAGE:
RaspManager
QR Code Scanner
SYSTEM_OVERLAY_WINDOW:
GPS HUD
WeCal
READ_APN_SETTINGS:
MyBackup
Contacts+
Just to ensure those apps are not declaring those permissions themselves, I picked some .apk files (MyBackup, GPS HUD) and ran aapt d badging against them. I found not a single declaration; all of them were only named by uses-permission:.
PS: Sources I usually consult for finding details on permissions include, besides a Google search, GitHub, Android Source, the Android Cross Reference, Android Developers, and several more. I had no luck with the above.
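As an aside, the "unknown permission" bookkeeping described above can be sketched roughly like this (a minimal illustration only – unknownPermissions and the abbreviated KNOWN set are hypothetical names, not the site's actual code):

```javascript
// Hypothetical sketch: flag requested permissions that are not part of
// a known, documented android.permission.* set.
const KNOWN = new Set([
  "android.permission.READ_EXTERNAL_STORAGE",
  "android.permission.WRITE_EXTERNAL_STORAGE",
  "android.permission.ACCESS_COARSE_LOCATION",
  // ... the full documented list would go here
]);

function unknownPermissions(requested) {
  // Anything not in the documented set is recorded as "unknown".
  return requested.filter((p) => !KNOWN.has(p));
}

unknownPermissions([
  "android.permission.WRITE_EXTERNAL_STORAGE",
  "android.permission.WRITE_INTERNAL_STORAGE", // not documented anywhere
]);
// → ["android.permission.WRITE_INTERNAL_STORAGE"]
```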

How can I improve iPhone UI Automation?

I was googling a lot in order to find a solution to my problems with UI Automation. I found a post that nicely summarizes the issues:
There's no way to run tests from the command line.(...)
There's no way to set up or reset state. (...)
Part of the previous problem is that UI Automation has no concept of discrete tests. (...)
There's no way to programmatically retrieve the results of the test run. (...)
source: https://content.pivotal.io/blog/iphone-ui-automation-tests-a-decent-start
Problem no. 3 can be solved with jasmine (https://github.com/pivotal/jasmine-iphone)
How about other problems? Have there been any improvements introduced since that post (July 20, 2010)?
And one more problem: is it true that the only existing method for selecting a particular UI element is adding an accessibility label in the application source code?
While UI Automation has improved since that post was made, the improvements that I've seen have all been related to reliability rather than new functionality.
He brings up good points about some of the issues with using UI Automation for more serious testing. If you read the comments later on, there's a significant amount of discussion about ways to address these issues.
The topic of running tests from the command line is discussed in this question, where a potential solution is hinted at in the Apple Developer Forums. I've not tried this myself.
You can export the results of a test after it is run, which you could parse offline.
Finally, in regards to your last question, you can address UI elements without assigning them an accessibility label. Many common UIKit controls are accessible by default, so you can already target them by name. Otherwise, you can pick out views from their location in the display hierarchy, like in the following example:
var tableView = mainWindow.tableViews()[0];
As always, if there's something missing from the UI Automation tool that is important to you, file an enhancement request so that it might find its way into the next version of the SDK.
Have you tried IMAT? https://code.intuit.com/sf/sfmain/do/viewProject/projects.ginsu . It uses the native JavaScript SDK that Apple provides and can be triggered via the command line or Instruments.
In response to each of your questions:
There's no way to run tests from the command line.(...)
Apple now provides this. With IMAT, you can kick off tests via command line or via Instruments. Before Apple provided the command line interface, we were using AppleScript to bring up Instruments and then kick off the tests - nasty.
There's no way to set up or reset state. (...)
Check out this state diagram: https://code.intuit.com/sf/wiki/do/viewPage/projects.ginsu/wiki/RecoveringFromTestFailures
Part of the previous problem is that UI Automation has no concept of discrete tests. (...)
Agreed. Both IMAT and tuneup.js (https://github.com/alexvollmer/tuneup_js#readme) allow for this.
There's no way to programmatically retrieve the results of the test run. (...)
Reading the trailing plist file is not trivial. IMAT provides a jUnit-like report after a test run by reading the plist file, and this is picked up by my CI tool (TeamCity, Jenkins, CruiseControl).
Check out http://lemonjar.com/blog/?p=69
It talks about how to run UIA from the command line
Try checking the element hierarchy; the table may be placed inside a UIScrollView.
var tableV = mainWindowTarget.scrollViews()[0].tableViews()[0].scrollToElementWithName("Name of element inside the cell");
The above script will work even if the element is in the 12th cell (but the name should exactly match the name inside the cell).