Interacting with Siri via the command line in macOS - macos-sierra

I use Siri on my phone and watch to create reminders on the go. When I'm in the office I don't want to disturb the quiet by using Siri, so I usually use an Alfred workflow that is integrated with the Reminders app, or use the Reminders app directly.
However, both have a rather clunky interface, and it would be much easier if I could just type at the command line:
$ siri "remind me to check stack overflow for responses to my question in 15 minutes"
macOS Sierra introduced Siri to the desktop, but so far I have been unable to find a way to interact with Siri other than by literally talking out loud, and Spotlight does not match Siri's natural-language comprehension.
Apple has announced the Siri SDK, but it seems primarily aimed at adding functionality to Siri, not at exposing a Siri API.
Does Apple expose any kind of API to Siri on macOS such that one could make Siri requests via the command line, system call, or other executable?
Note: I understand that this question could conceivably find a better home at Ask Different, Super User, or Unix & Linux. In the end, I decided that some programmatic integration with an API or SDK was the most probable solution, and thus Stack Overflow seemed the most appropriate place to post. If mods disagree, please do migrate to whichever community is best.

This isn't from the command line, but it's closer. I haven't tested it, but in High Sierra there's an Accessibility setting that lets you use your keyboard to ask Siri questions.
How to enable it:
System Preferences > Accessibility > Siri.
Click the box beside Enable Type to Siri so that a tick appears.
Now when you trigger Siri, a text field will appear into which you can type your query.
Snagged from here: https://www.macworld.co.uk/news/mac-software/how-use-siri-on-mac-3536158/
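If you would rather flip that switch from the command line, it may also work via defaults. Note that the preference domain and key below are an assumption (based on the com.apple.Siri preferences) and may vary between macOS versions:
$ defaults write com.apple.Siri TypeToSiriEnabled -bool true
You may need to log out and back in, or toggle Siri off and on, for the change to take effect.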

I wanted the same feature today. I got it working, though it could be improved upon: https://youtu.be/VRLGCRrReog
TL;DR: use Loopback by Rogue Amoeba and change Siri's input microphone to the Loopback device, then use the say command in Terminal, for example.
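A rough sketch of that setup (assuming Loopback's virtual device keeps its default name, "Loopback Audio"; adjust to whatever yours is called): trigger Siri with its keyboard shortcut, then have say speak the query into the device Siri is listening on:
$ say -a "Loopback Audio" "remind me to check Stack Overflow for responses to my question in 15 minutes"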

As mentioned by Brad Parks, you can enable 'Type to Siri' from the Accessibility menu. You can use this to interact with Siri using simulated keypresses.
I've created a simple Python script which behaves as requested in your question when invoked from the command line.
The script uses the keyboard Python module.
#!/usr/bin/python
import sys
import time

import keyboard  # third-party module: pip install keyboard

def trigger_siri():
    # Hold Command+Space briefly, the default "hold to activate" Siri shortcut
    keyboard.press('command+space')
    time.sleep(0.3)
    keyboard.release('command+space')
    time.sleep(0.2)  # Wait for Siri to load

if __name__ == '__main__':
    trigger_siri()
    keyboard.write(sys.argv[1])  # Type the query passed as the first argument
    keyboard.send('enter')
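A note on running it: keyboard is a third-party package (pip install keyboard), and on macOS it generally has to run as root with Accessibility permission granted to your terminal. Assuming the script above is saved as siri_type.py (an illustrative name), the invocation would look something like:
$ sudo python siri_type.py "remind me to check stack overflow for responses to my question in 15 minutes"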

Cliclick is a great (and free) tool for triggering mouse and keyboard events via the command line. After installing Cliclick, I enabled "Type to Siri" (System Preferences > Accessibility > Siri). I also changed Siri's keyboard shortcut to "Press Fn (Function) Space" (System Preferences > Siri). The other keyboard shortcut options require you to "Hold" a key, which can be done, but it makes things a bit trickier.
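cliclick is also available through Homebrew, so installation can be a one-liner:
$ brew install cliclick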
With all that done, I can invoke Siri from the terminal with something like this:
$ cliclick kd:fn kp:space ku:fn w:250 t:"turn on the living room lights" kp:return
Going a step further, if you are familiar with terminal aliases and functions, you can create a "siricli" function:
siricli(){
cliclick kd:fn kp:space ku:fn w:250 t:"$1" kp:return
}
Open a new terminal window after adding that function, and now you can invoke Siri from the command line like this:
siricli "turn on the living room lights"

Related

On macOS Monterey, cannot create shortcut actions with Catalyst

We are trying to create shortcut actions with Catalyst.
Our app is already available on Mac, and we previously integrated the Intents framework on iOS. According to the WWDC21 "Meet Shortcuts on macOS" presentation, "it's likely that [we] have compiled out [our] Intents integration in the process of coming to Mac", so it's no surprise that we cannot create shortcut actions for Mac in our app with Catalyst.
The WWDC presentation suggests we "make sure to audit [our] code to re-enable this functionality when running on macOS Monterey", but we do not understand what we need to do based on this suggestion.
What we have tried so far:
We managed to create shortcut actions for Mac with Catalyst in the example app available at https://github.com/mralexhay/ShortcutsExample, so the problem must come from our own app.
We managed to create shortcut actions for iOS in our app.
We tried creating a fresh Intents extension in our app, but the shortcut actions are still only available on iOS, not on Mac.
Has anyone found a solution in a similar situation?
When creating a shortcut action, Shortcuts can get mixed up by multiple compiled copies of the same app identifier. You therefore need to delete all the compiled versions of your app.
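In practice, one way to do that (a sketch, assuming Xcode's default DerivedData location) is:
$ rm -rf ~/Library/Developer/Xcode/DerivedData
Then delete any remaining old copies of the .app bundle, rebuild, and let Shortcuts pick up the single fresh build.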
I'm having a similar problem with the "Meet Shortcuts on macOS" example. I haven't done anything with Shortcuts before, but I have with AppleScript. I have managed to sort out a couple of problems due to beta changes, but I end up with this method:
let task = createTask(name: title, due: dueDate)
which doesn't exist. Worse still, it's supposed to return a Task to assign to the CreateTaskIntentResponse.task property, but Task is already defined, so I can't really redefine it; besides, it seems like it should be a generated type based on all the intent information I supplied.

Keyboard Filter Driver. Scan Code -> VK_??? (OEM Specific)

Preface (imaginary, so that no one asks "What are you trying to do?"):
I have a Win32 C++ application.
This application wants to know when the user wants to open the start menu via Ctrl+Esc
Of course, Ctrl+Esc is handled by the operating system, so the application never sees it.
I have looked at Windows Virtual Keys.
I see that there are plenty of OEM specific VK's
(0x92-0x96,0xE0,0xE9-0xF5,..)
So my thought was:
Keyboard Filter Driver.
When my application has the focus it tells the Keyboard Filter Driver.
When my driver sees the Ctrl is down and an Esc down occurs (And my application has focus):
-- Swallow the Esc and replace it with a scan code that will produce say a VK_0x92 (OEM Specific).
Since I have swallowed the Esc, the operating system will never see Ctrl+Esc.
My application will then see the VK_0x92 and know the user wants to open the start menu and perform some action.
My question is: how do I modify the input within my driver (KEYBOARD_INPUT_DATA) so that, say, VK_0x92 appears within my application?
Thanks in advance for any pointers.
It is all about the Keyboard Layout.
What I needed to do was not supported by Microsoft Keyboard Layout Creator (MKLC).
See: Keyboard Layout Samples.
I found the samples to be very old and hard to read through. Clearly the US and German keyboard samples are not the most recent.
I wrote a program that creates Visual Studio projects for keyboard layouts by pointing it at a specific layout (KBDUS.dll, for example). It generates the source code, the .vcxproj, and so on. I then make my modifications and build it.
Installing the layout is another can of worms entirely. I have asked in several places for Microsoft to release the source code for the CustomAction DLL contained within the MKLC-generated .MSI, to no avail.

Launch my app by hotkeys and pass it arguments from Finder (macOS)

I'm only starting macOS programming. Did some tutorials, reading docs at developers.apple.com. Trying to implement a simple(?) thing, but can't seem to get the whole picture for now.
I want to be able to launch my app by pressing some hot keys combination. The app itself is just a window with a text field that has a list of selected files in Finder (if any).
Naturally, I'm not asking for a concrete implementation. But some hints and directions on the general structure, or on what concepts and classes to inspect would be very helpful.
macOS 10.13.4, Xcode 9.3.1, Swift 4
Probably the best approach is to implement a "service". See the Services Implementation Guide.
A service is a tool that appears in the Services submenu of the application menu and in contextual menus. It can be configured to be invoked by a hot key. The active application at the time the service is invoked cooperates by providing the current selection.
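To give a concrete sketch of what that involves: services are advertised under the NSServices key in the app's Info.plist. The menu item title, selector name, and send types below are placeholders to adapt to your app:
<key>NSServices</key>
<array>
    <dict>
        <key>NSMenuItem</key>
        <dict>
            <key>default</key>
            <string>Open With MyApp</string>
        </dict>
        <key>NSMessage</key>
        <string>handleSelectedFiles</string>
        <key>NSPortName</key>
        <string>MyApp</string>
        <key>NSSendFileTypes</key>
        <array>
            <string>public.item</string>
        </array>
    </dict>
</array>
The method named by NSMessage lives on the object you assign as NSApplication's servicesProvider and receives the selection on an NSPasteboard; the hot key is assigned under System Preferences > Keyboard > Shortcuts > Services.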

Assign command to the central soft button within javaMe

I have a Java ME application that has been working on Nokia phones. However, now I'm porting it to the Samsung 5611, and I've run into a problem: no command is assigned to the central soft button; all of them end up in the right-button menu. When the same MIDlet was launched on a Nokia 3110c, one command was placed on the central button and the others (if there were two or more) were grouped into the options menu.
I tried Item.setDefaultCommand (no effect) and Display.getInstance().setThirdSoftButton(true) (that method is not supported in SDK 3.4). I also tried changing the type of one command to OK or SCREEN and changing the priorities, all without success.
Thanks in advance. Any ideas will be helpful.
Sadly, there's no way for the developer to decide exactly which softbuttons the commands go on. It is the individual device that decides. Some devices have two softbuttons, and some have three.
You can fiddle a bit with priorities, but you still can't force commands to specific softbuttons.
That's high-level GUI (Form) for you.
If you want to have control of such things, you need to go with low-level GUI (Canvas / GameCanvas). Nowadays there are several APIs you can use to create Form-like low-level GUI. Check out LWUIT for example, which I imagine makes it easy for you to port your high-level code into low-level.
But even when using low-level coding, you have to be aware of different devices having different keycodes for the softbuttons.

Voice coding in Emacs on Mac OS X

I would like to be able to write code by voice recognition and am currently using Aquamacs 2.4 and Dragon Dictate 2 on Mac OS X 10.6.8. Does anybody know if this is possible, and if so, how? I've seen ShortTalk, Emacs Listen, and VoiceCode, but they only work on Windows machines with Dragon NaturallySpeaking.
Any leads would be much appreciated.
Also I am writing in R via ESS.
Have a look at this presentation by Tavis Rudd: http://www.youtube.com/watch?v=8SkdfdXWYaI
He runs Dragon Naturally Speaking inside a Windows VM, because the Windows version can be scripted with Python. Then the VM communicates with Emacs on his local machine.
He says in the presentation that he will open-source his code, but it doesn't seem to be on his GitHub yet.
So yes, it's possible, but at this point there is no out-of-the-box solution. If you really want this, prepare to invest weeks or months to get to a properly working setup.
I recently released the coding-by-voice solution I created to solve my own RSI issues. It can be found here: http://www.voicecode.io
I use it mostly for coding in Sublime Text and Xcode, but it works great with emacs or vim as well. The great thing about this solution is that all commands can be chained into "command phrases" so you don't have to pause between every individual command like you do with other voice command solutions.
It has built-in support for all standard variable-name formats (snake case, camel case, etc.), built-in commands for every permutation of keyboard shortcuts (e.g. command-shift-5, command-option-shift-T, and so on), cursor movement commands, app switching commands, window switching commands, commands for symbol combos like "=>", "||", ">=", etc., and tons more. Plus it is very easy to add your own custom commands.