I'm close to getting my homegrown POS app to work with Square, but I'm missing something simple and can't seem to turn up an answer. I'm using FileMaker Go as the app, but I don't think that's relevant to my current proof-of-concept issue. It may be relevant to other issues later (callbacks).
In my point-of-sale-api settings, I have:
com.filemaker.go.17
for the Bundle ID, and
create-workflow
for the iOS App URL Schemes, which seems to be the only form of the value that Square lets me save. Anything with a prefix, such as shortcuts://create-workflow, gives an error with no description. (I'm hoping that Square will trigger a workflow as a test in this POC.)
I'm hoping to just trigger Safari or Workflow/Shortcuts with the callback, since FileMaker Go doesn't directly accept the callback response without a helper application (which I'll eventually try).
Any thoughts on what I'm missing?
Thanks tons!
We have to use HERE Maps' turn-by-turn navigation feature in one of our Flutter applications. We have added billing in the developer account and created the necessary keys.
When we try the HERE Maps examples they have provided, we get everything except the maneuver instructions that show the user when to turn right/left or go straight for some distance, etc.
I'm new to this and have no idea how to get them; we never get events on the listener, and it only shows "Updating..." there. Am I missing something?
This is how it looks right now: "Updating..."
I think we should be getting the progress in this listener, but we are not getting it:
_visualNavigator.routeProgressListener = Navigation.RouteProgressListener((routeProgress) { });
Please look into the provided example app. It shows how to get the maneuver actions.
Your screenshot shows a different app, so make sure everything works with the example app first. The app offers a simulation mode, which should work. If you run the example app with real GPS updates, you may need to go outside and move around to get location updates. This should also work.
If this still does not work, it could mean either that your device has an issue getting GPS locations (some iPads, for example, lack support) or that you have disabled location updates. You can cross-check this by trying the positioning_app example from the same repository, which shows how to get location updates.
A last point: clarify which events you get and which you miss. There are multiple event listeners providing real-time information during guidance; if your only issue is with maneuvers, you can most likely solve it by strictly following the code of the example app.
Note that HERE SDK versions prior to 4.13.0 only provided empty maneuver instruction texts during guidance when they were taken from the route instance. Make sure to take this information from the VisualNavigator instead.
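For reference, here is a minimal sketch of how the example app wires this up, assuming a recent HERE SDK 4.x for Flutter (the names ManeuverProgress, maneuverIndex, getManeuver and remainingDistanceInMeters are recalled from the example code and should be verified against it):
import 'package:here_sdk/navigation.dart';
import 'package:here_sdk/routing.dart';

void listenForManeuvers(VisualNavigator visualNavigator) {
  visualNavigator.routeProgressListener =
      RouteProgressListener((RouteProgress routeProgress) {
    // Progress for the next maneuver ahead (and the one after that, if any).
    List<ManeuverProgress> nextManeuverList = routeProgress.maneuverProgress;
    if (nextManeuverList.isEmpty) {
      return;
    }
    ManeuverProgress nextManeuverProgress = nextManeuverList.first;

    // Take the maneuver details from the VisualNavigator, not from the Route.
    Maneuver? nextManeuver =
        visualNavigator.getManeuver(nextManeuverProgress.maneuverIndex);
    if (nextManeuver == null) {
      return;
    }

    ManeuverAction action = nextManeuver.action; // e.g. a left or right turn
    String instruction = nextManeuver.text;      // human-readable instruction
    int distance = nextManeuverProgress.remainingDistanceInMeters;
    print('$instruction ($action) in $distance m');
  });
}
If this listener fires but the text is empty, the SDK version note above is the most likely cause.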
For a project I'm working on that is in Alpha right now, I used to use the invocation "talk to XXX". Now that I want to deploy the Action to Beta, I want to change the invocation name/phrase as well, so I changed it to "talk to YYY", which is also the suggested input in the simulator. But when I test this in the simulator, I get the following error message:
Invocation Error:
You cannot use standard Google Assistant features in the Simulator. If you want to try them, use Google Assistant on your phone or other compatible devices.
For some reason, if I ignore the suggested input chip (which says "talk to YYY") and type in "talk to XXX" (the old invocation phrase), everything still works. It seems I'm missing something, and Google support can't answer me. Does someone know what I can do to successfully deploy to Beta?
When I get that message, it's usually a problem with the Console. A refresh of the page—and sometimes a simple retry—usually does the trick.
You could also try "Change Version" to make sure you're pointed to "Draft".
I have added a 'number_of_members' value to the Customer DocType via customization.
In my application I have tried several ways to update the value. However, the value never updates on the web page. I feel like I'm missing some sort of save, update, or commit step.
For example I have tried:
frappe.client.set_value('Customer', '00042', 'number_of_members', 8887)
frappe.set_value('Customer', '00042', 'number_of_members', 8887)
frappe.db.set_value('Customer', '00042', 'number_of_members', 8887)
and also
customer = frappe.get_doc('Customer', '00042')
customer.number_of_members = 8887
customer.save()
In each case I can then do something like frappe.get_value or frappe.get_doc and it shows the value set to 8887. However, it never updates on the web side. This is what makes me think I'm updating some sort of cache or uncommitted database transaction and need some way to persist it, but I have not had any luck.
I am mostly testing this via bench console if that has any bearing on it, but I've tried a couple of the methods in my application code as well.
Relevant documentation:
Frappe Developer API - Document
Frappe Developer API - Database
Turns out the answer is to call frappe.db.commit() after making changes. If someone can point this out in the documentation so I can better understand what I'm missing, I would appreciate it.
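Concretely, in a bench console session it ends up looking roughly like this (Customer '00042' and number_of_members are from my own customization):
import frappe

# Direct database update; invisible to the web side until committed.
frappe.db.set_value('Customer', '00042', 'number_of_members', 8887)
frappe.db.commit()

# The same applies when going through the document API.
customer = frappe.get_doc('Customer', '00042')
customer.number_of_members = 8887
customer.save()
frappe.db.commit()
As far as I can tell, normal web requests commit automatically when they finish, which is why the same code behaves differently in server-side application code than in the console.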
I also noticed that if you try to save something in the UI before you call frappe.db.commit(), the UI will hang.
When I update tilesets on Mapbox, the changes don't appear in the iOS app unless I re-install it. There is documentation on this here: https://docs.mapbox.com/ios/api/maps/5.2.0/Classes/MGLOfflineStorage.html#/c:objc(cs)MGLOfflineStorage(im)setMaximumAmbientCacheSize:withCompletionHandler: but I can't figure out how exactly to implement it. I don't have an MGLOfflineStorage object because I'm not worried about offline map storage right now; I just want to refresh the cache in the app. There are good examples of how to do this on Android, but not on iOS. Any help is appreciated (preferably in Swift).
It appears to be correct to call the methods on the shared MGLOfflineStorage object. The method parameter should be a closure containing any code you want to execute upon completion.
MGLOfflineStorage.shared.invalidateAmbientCache { error in
    if let error = error { print("Invalidate failed: \(error)"); return }
    print("Invalidated")
}
Naturally, handle the error in whatever way suits your app rather than just printing it.
I was googling a lot in order to find a solution for my problems with UI Automation. I found a post that nicely summarizes the issues:
There's no way to run tests from the command line.(...)
There's no way to set up or reset state. (...)
Part of the previous problem is that UI Automation has no concept of discrete tests. (...)
There's no way to programmatically retrieve the results of the test run. (...)
source: https://content.pivotal.io/blog/iphone-ui-automation-tests-a-decent-start
Problem no. 3 can be solved with jasmine (https://github.com/pivotal/jasmine-iphone)
How about other problems? Have there been any improvements introduced since that post (July 20, 2010)?
And one more problem: is it true that the only existing method for selecting a particular UI element is adding an accessibility label in the application source code?
While UI Automation has improved since that post was made, the improvements that I've seen have all been related to reliability rather than new functionality.
He brings up good points about some of the issues with using UI Automation for more serious testing. If you read the comments later on, there's a significant amount of discussion about ways to address these issues.
The topic of running tests from the command line is discussed in this question, where a potential solution is hinted at in the Apple Developer Forums. I've not tried this myself.
You can export the results of a test after it is run, which you could parse offline.
Finally, in regards to your last question, you can address UI elements without assigning them an accessibility label. Many common UIKit controls are accessible by default, so you can already target them by name. Otherwise, you can pick out views from their location in the display hierarchy, like in the following example:
var tableView = mainWindow.tableViews()[0];
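For the name-based route, something along these lines should also work (the "Done" label here is just a hypothetical example):
var target = UIATarget.localTarget();
var app = target.frontMostApp();
// Look the button up by its accessibility label instead of its index.
app.mainWindow().buttons()["Done"].tap();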
As always, if there's something missing from the UI Automation tool that is important to you, file an enhancement request so that it might find its way into the next version of the SDK.
Have you tried IMAT? https://code.intuit.com/sf/sfmain/do/viewProject/projects.ginsu. It uses the native JavaScript SDK that Apple provides and can be triggered via the command line or Instruments.
In response to each of your questions:
There's no way to run tests from the command line.(...)
Apple now provides this. With IMAT, you can kick off tests via the command line or via Instruments. Before Apple provided the command-line interface, we were using AppleScript to bring up Instruments and then kick off the tests, which was nasty.
There's no way to set up or reset state. (...)
Check out this state diagram: https://code.intuit.com/sf/wiki/do/viewPage/projects.ginsu/wiki/RecoveringFromTestFailures
Part of the previous problem is that UI Automation has no concept of discrete tests. (...)
Agreed. Both IMAT and tuneup.js (https://github.com/alexvollmer/tuneup_js#readme) allow for this.
There's no way to programmatically retrieve the results of the test run. (...)
Reading the resulting plist file is not trivial. IMAT provides a jUnit-like report after a test run by reading the plist file, and this is picked up by my CI tool (TeamCity, Jenkins, CruiseControl).
Check out http://lemonjar.com/blog/?p=69; it talks about how to run UI Automation from the command line.
Check the element hierarchy; the table may be placed inside a UIScrollView.
var tableV = mainWindowTarget.scrollViews()[0].tableViews()[0].scrollToElementWithName("Name of element inside the cell");
The above script will work even if the element is in the 12th cell (but the name must exactly match the name inside the cell).