I need to preface this by saying that I'm coming from years of developing on Windows. This time I'm developing the UI for my macOS app using SwiftUI. The goal is to allow only one instance of the app per location it's started from. For instance, if you copy the app to:
/Users/me/Documents/MyAppCopy/MyApp.app
and to:
/Users/me/Documents/MyApp.app
then only two instances of the app should be allowed, one from each of those locations.
On Windows I would use a named kernel object, say a named event: create it when the app starts and check whether it already existed. If so, I'd quit the app. When the instance that created it closes, the named event is destroyed by the system automatically.
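For reference, the Windows version of that check is roughly this (a sketch; the event name here is illustrative, in practice I derive it from the path):
#include <windows.h>

// Sketch of the Windows single-instance check. The event name here is
// illustrative; the real one would be derived from the app's location.
// The kernel destroys the event once the last handle to it is closed.
BOOL IsAnotherInstanceRunningWin(void)
{
    HANDLE hEvent = CreateEventW(NULL, TRUE, FALSE, L"MyApp_SingleInstance");
    if (hEvent != NULL && GetLastError() == ERROR_ALREADY_EXISTS)
    {
        CloseHandle(hEvent);
        return TRUE;   // another instance already created the event
    }
    return FALSE;      // keep hEvent open for this instance's lifetime
}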
So I thought I'd try the same on macOS, but evidently POSIX systems (Linux/BSD/macOS) treat named objects differently.
I get the name for the object from the bundle path in Swift and pass it to a C function:
let objName = Bundle.main.bundlePath
IsAnotherInstanceRunning(objName, objName.lengthOfBytes(using: String.Encoding.utf8))
In C I then replace the slashes and use the result as the name of a POSIX named semaphore:
#include <errno.h>
#include <fcntl.h>
#include <semaphore.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

bool IsAnotherInstanceRunning(const char* pBundlePath, size_t szcbLnPath)
{
    bool bResult = false;
    char* pName = (char*)malloc(szcbLnPath + 1);
    if (pName)
    {
        memcpy(pName, pBundlePath, szcbLnPath);
        pName[szcbLnPath] = 0;

        // Replace slashes with underscores
        int cFnd = '/';
        char* cp = strchr(pName, cFnd);
        while (cp)
        {
            *cp = '_';
            cp = strchr(cp, cFnd);
        }

        // Create the semaphore if it doesn't exist; fail if it already exists
        sem_t* sem = sem_open(pName, O_CREAT | O_EXCL, 0644, 0);
        if (sem == SEM_FAILED)
        {
            // Failed, see why
            int nErr = errno;
            if (nErr == EEXIST)
            {
                // Another instance already created it
                bResult = true;
            }
        }

        free(pName);
    }
    return bResult;
}
Assuming the path name isn't too long (that is an issue, but it's irrelevant to this question), the approach above works, but it has one downside.
If I don't close and remove the named semaphore with:
sem_close(sem);
sem_unlink(pName);
it stays in the system if the instance of my app crashes, which creates an obvious problem for the code above.
So how would you do this on Linux/macOS?
To prevent/abort the start of an app while another instance is running, you can also use high-level AppKit stuff (we do this in one of our apps and it works reliably).
Use NSWorkspace.runningApplications to get a list of, well, the currently running applications. Check this list, and filter it for the bundleIdentifier of your app. You can then also check the bundleURL to decide whether it's OK to start the current app, which seems to be what you want to do. See also NSRunningApplication.current to get the information about your current process.
(You can do [otherRunningApplication isEqual:NSRunningApplication.current] to check/filter the current process.)
Do this check in your applicationWillFinishLaunching or applicationDidFinishLaunching method.
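A minimal sketch of that check in Swift (assuming an NSApplicationDelegate; terminating is just one possible reaction, and the duplicate filter here is illustrative):
import AppKit

// Sketch: abort launch when another instance with the same bundle identifier
// and the same bundle URL is already running.
func applicationWillFinishLaunching(_ notification: Notification) {
    let current = NSRunningApplication.current
    let duplicates = NSWorkspace.shared.runningApplications.filter {
        $0 != current &&
        $0.bundleIdentifier == current.bundleIdentifier &&
        $0.bundleURL == current.bundleURL
    }
    if !duplicates.isEmpty {
        NSApp.terminate(nil)
    }
}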
I'm trying to get the Steam app ID and Steam user ID in my Unreal Engine 4 project in the following way:
if (SteamAPI_Init())
{
    IOnlineSubsystem* ossBase = IOnlineSubsystem::Get();
    FOnlineSubsystemSteam* oss = Cast<FOnlineSubsystemSteam*>(ossBase);
    if (!oss) {
        printText(TEXT("Steam Subsystem is down!"), FColor::Red.WithAlpha(255));
        return;
    }
    auto SteamID = FString(std::to_string(SteamUser()->GetSteamID().ConvertToUint64()).c_str());
    auto AppID = FString(std::to_string(oss->GetSteamAppId()).c_str());
But it is not possible to convert IOnlineSubsystem to FOnlineSubsystemSteam this way. So what is the correct way to obtain the FOnlineSubsystemSteam instance?
The solution is to use static_cast:
FOnlineSubsystemSteam* oss = static_cast<FOnlineSubsystemSteam*>(ossBase);
This works. Using UE4's Cast<> seems like the obvious choice, but it does not work here, because Cast<> only works on UObject-derived types and the online subsystem classes are not UObjects.
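A slightly fuller sketch (assuming the OnlineSubsystemSteam module and its headers are available to your build; the subsystem-name check is just a guard, since static_cast performs no runtime type check):
#include "OnlineSubsystem.h"
#include "OnlineSubsystemNames.h"
#include "OnlineSubsystemSteam.h"

// Only apply the static_cast when the active subsystem really is Steam.
IOnlineSubsystem* OssBase = IOnlineSubsystem::Get(STEAM_SUBSYSTEM);
if (OssBase && OssBase->GetSubsystemName() == STEAM_SUBSYSTEM)
{
    FOnlineSubsystemSteam* SteamOss = static_cast<FOnlineSubsystemSteam*>(OssBase);
    // SteamOss->GetSteamAppId() etc. can be called from here.
}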
It is possible to differentiate among speakers/users with the Watson Unity SDK: it can apparently return an array that identifies which words were spoken by which speakers in a multi-person exchange. However, I cannot figure out how to make this work, particularly when I am sending different utterances (spoken by different people) to the Assistant service to get an appropriate response for each.
The code snippets for parsing Assistant's JSON output/response, as well as OnRecognize, OnRecognizeSpeaker, SpeechRecognitionResult and SpeakerLabelsResult, are there, but how do I get Watson to return this from the server when an utterance is recognized and its intent is extracted?
Both OnRecognize and OnRecognizeSpeaker are passed only once, in the Active property, so they should both be called, but only OnRecognize does the speech-to-text (transcription); OnRecognizeSpeaker is never fired...
public bool Active
{
    get
    {
        return _service.IsListening;
    }
    set
    {
        if (value && !_service.IsListening)
        {
            _service.RecognizeModel = (string.IsNullOrEmpty(_recognizeModel) ? "en-US_BroadbandModel" : _recognizeModel);
            _service.DetectSilence = true;
            _service.EnableWordConfidence = true;
            _service.EnableTimestamps = true;
            _service.SilenceThreshold = 0.01f;
            _service.MaxAlternatives = 0;
            _service.EnableInterimResults = true;
            _service.OnError = OnError;
            _service.InactivityTimeout = -1;
            _service.ProfanityFilter = false;
            _service.SmartFormatting = true;
            _service.SpeakerLabels = false;
            _service.WordAlternativesThreshold = null;
            _service.StartListening(OnRecognize, OnRecognizeSpeaker);
        }
        else if (!value && _service.IsListening)
        {
            _service.StopListening();
        }
    }
}
Typically, the output of Assistant (i.e. its result) is something like the following:
Response: {"intents":[{"intent":"General_Greetings","confidence":0.9962662220001222}],"entities":[],"input":{"text":"hello eva"},"output":{"generic":[{"response_type":"text","text":"Hey!"}],"text":["Hey!"],"nodes_visited":["node_1_1545671354384"],"log_messages":[]},"context":{"conversation_id":"f922f2f0-0c71-4188-9331-09975f82255a","system":{"initialized":true,"dialog_stack":[{"dialog_node":"root"}],"dialog_turn_counter":1,"dialog_request_counter":1,"_node_output_map":{"node_1_1545671354384":{"0":[0,0,1]}},"branch_exited":true,"branch_exited_reason":"completed"}}}
I have set up intents and entities, and this list is returned by the Assistant service, but I am not sure how to get it to also consider my entities or how to get it to respond accordingly when the STT recognizes different speakers.
I would appreciate some help, particularly how to do this via Unity scripting.
I had the exact same question about dealing with the Assistant's messages, so I looked at the Assistant.OnMessage() method, which logs a string like "Response: {0}", customData["json"].ToString(), followed by JSON output that looks something like this:
[Assistant.OnMessage()][DEBUG] Response: {"intents":[{"intent":"General_Greetings","confidence":1}],"entities":[],"input":{"text":"hello"},"output":{"text":["good evening"],"nodes_visited": etc...}
I personally parse the JSON in order to extract the content from messageResponse.Entities. In the above example you can see that the array is empty, but if you are populating it, that's where you extract the values from, and your code can then do whatever it needs with them.
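For example, a minimal sketch of that parsing step using Unity's built-in JsonUtility (the classes below only model the fields needed here and are illustrative, not the SDK's own models):
using System;
using UnityEngine;

// Illustrative models: only the fields we care about from the Assistant response.
[Serializable] public class RecognizedIntent { public string intent; public float confidence; }
[Serializable] public class RecognizedEntity { public string entity; public string value; }
[Serializable] public class AssistantOutput { public string[] text; }
[Serializable] public class AssistantResponse
{
    public RecognizedIntent[] intents;
    public RecognizedEntity[] entities;
    public AssistantOutput output;
}

public static class AssistantJsonHelper
{
    // json is the raw string from customData["json"].ToString()
    public static AssistantResponse Parse(string json)
    {
        AssistantResponse parsed = JsonUtility.FromJson<AssistantResponse>(json);
        if (parsed != null && parsed.entities != null)
        {
            foreach (RecognizedEntity e in parsed.entities)
            {
                Debug.Log("Entity: " + e.entity + " = " + e.value);
            }
        }
        return parsed;
    }
}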
Regarding recognizing different speakers: in the Active property whose code you have included, the _service.StartListening(OnRecognize, OnRecognizeSpeaker) line registers both callbacks, so perhaps put some Debug.Log statements inside each of them to see whether they are called.
Also, set SpeakerLabels to true; your Active property currently sets it to false, which is why OnRecognizeSpeaker never fires:
_service.SpeakerLabels = true;
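Once that is enabled, the speaker callback should start receiving data. A rough sketch of handling it (the callback and field names follow the SpeakerLabelsResult model mentioned in the question, so treat them as assumptions for your SDK version):
// Assumed callback shape for the Watson Unity SDK's speaker-label events;
// adjust the types to the SDK version you are using.
private void OnRecognizeSpeaker(SpeakerRecognitionEvent result)
{
    if (result == null || result.speaker_labels == null)
        return;

    foreach (SpeakerLabelsResult label in result.speaker_labels)
    {
        // Each entry maps a time range of the audio to a numeric speaker id.
        Debug.Log(string.Format("Speaker {0}: {1:0.00}s-{2:0.00}s (confidence {3:0.00})",
            label.speaker, label.from, label.to, label.confidence));
    }
}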
Hi all. I want to call a JS function from my plugin to show something. This is my code:
NPObject* npwindow = NULL;
NPError ret = browser->getvalue(mInstanceForJS, NPNVWindowNPObject, &npwindow);
if (ret != NPERR_NO_ERROR)
    return;

// Get window object.
NPVariant windowVar;
NPIdentifier winID = browser->getstringidentifier("window");
bool bRet = browser->getproperty(mInstanceForJS, npwindow, winID, &windowVar);
if (!bRet)
{
    browser->releaseobject(npwindow);
    return;
}
NPObject* window = NPVARIANT_TO_OBJECT(windowVar);

NPVariant voidResponse;
NPVariant elementId;
STRINGZ_TO_NPVARIANT([info UTF8String], elementId);
NPVariant args[] = { elementId };
NPIdentifier funcID = browser->getstringidentifier([funName UTF8String]);
bRet = browser->invoke(mInstanceForJS, window, funcID, args, 1, &voidResponse);
browser->releasevariantvalue(&windowVar);
When bRet = browser->invoke(mInstanceForJS, window, funcID, args, 1, &voidResponse); is called, Safari does not respond. Are there any errors in my code?
npwindow is already the window object; you're effectively querying for "window.window". Granted, I don't know why this wouldn't work, but it seems a little weird.
That's problem #1.
Problem #2 is that you're using STRINGZ_TO_NPVARIANT to store the result of UTF8String. STRINGZ_TO_NPVARIANT doesn't copy the memory, so you could be in trouble if the function wanted to hang onto that string, since the string returned by UTF8String will be freed when your autorelease pool cycles. It could also turn into a memory leak. Either way, the correct way to pass a string to the browser is to allocate memory for it using NPN_MemAlloc, copy the string in, and then pass that pointer to the browser. See http://npapi.com/memory for more info.
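A hedged sketch of that pattern, reusing the browser, info, and elementId names from the question's code:
/* Copy the NSString's UTF-8 bytes into browser-owned memory before handing
 * them to NPAPI. The browser frees the buffer when the variant is released. */
const char* utf8 = [info UTF8String];
size_t len = strlen(utf8);
char* copy = (char*)browser->memalloc((uint32_t)(len + 1));
if (copy)
{
    memcpy(copy, utf8, len + 1);
    NPVariant elementId;
    STRINGZ_TO_NPVARIANT(copy, elementId);
    /* ...pass elementId in the args array to browser->invoke(), then call
     * browser->releasevariantvalue(&elementId) when you are done with it. */
}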
Problem #3 is that you haven't given us any idea of when you are running this code; it's quite possible that you are trying to run this code too early in the plugin or page lifecycle and thus it may not work because of that.
Then there is another question: what do you mean by "Safari does not respond"? Does it hang? Is bRet false? Does your computer suddenly get encased in ice, thus halting all processing? If the above is not helpful, please answer these questions and I'll try again.
I'm trying to write a very simple program to replace an existing executable. It should munge its arguments slightly and exec the original program with the new arguments. It's supposed to be invoked automatically and silently by a third-party library.
It runs fine, but it pops up a console window to show the output of the invoked program. I need that console window to not be there. I do not care about the program's output.
My original attempt was set up as a console application, so I thought I could fix this by writing a new Windows GUI app that did the same thing. But it still pops up the console. I assume that the original command is marked as a console application, and so Windows automatically gives it a console window to run in. I also tried replacing my original call to _exec() with a call to system(), just in case. No help.
Does anyone know how I can make this console window go away?
Here's my code:
int APIENTRY _tWinMain(HINSTANCE hInstance,
                       HINSTANCE hPrevInstance,
                       char* lpCmdLine,
                       int nCmdShow)
{
    char *argString, *executable;
    // argString and executable are retrieved here

    std::vector< std::string > newArgs;
    // newArgs gets set up with the intended arguments here

    char const ** newArgsP = new char const*[newArgs.size() + 1];
    for (unsigned int i = 0; i < newArgs.size(); ++i)
    {
        newArgsP[i] = newArgs[i].c_str();
    }
    newArgsP[newArgs.size()] = NULL;

    int rv = _execv(executable, newArgsP);
    if (rv)
    {
        return -1;
    }
}
Use the CreateProcess function instead of _execv. For the dwCreationFlags parameter, pass the CREATE_NO_WINDOW flag. You will also need to pass the command line as a single string.
e.g.
STARTUPINFO startInfo = {0};
PROCESS_INFORMATION procInfo;
TCHAR cmdline[] = _T("\"path\\to\\app.exe\" \"arg1\" \"arg2\"");
startInfo.cb = sizeof(startInfo);

if (CreateProcess(_T("path\\to\\app.exe"), cmdline, NULL, NULL, FALSE, CREATE_NO_WINDOW, NULL, NULL, &startInfo, &procInfo))
{
    CloseHandle(procInfo.hProcess);
    CloseHandle(procInfo.hThread);
}
Aha, I think I found the answer on MSDN, at least if I'm prepared to use .NET. (I don't think I'm really supposed to, but I'll ignore that for now.)
System::String^ command = gcnew System::String(executable);
System::Diagnostics::Process^ myProcess = gcnew System::Diagnostics::Process;
myProcess->StartInfo->FileName = command;
myProcess->StartInfo->UseShellExecute = false; //1
myProcess->StartInfo->CreateNoWindow = true;   //2
myProcess->Start();
It's those two lines marked //1 and //2 that are important. Both need to be present.
I really don't understand what's going on here, but it seems to work.
You need to create a non-console application (i.e. a Windows GUI app). If all this app does is some processing of files or whatever, you won't need to have a WinMain, register any windows or have a message loop - just write your code as for a console app. Of course, you won't be able to use printf et al. And when you come to execute it, use the exec() family of functions, not system().
I want to write a Word add-in that does some computations and updates some UI whenever the user types something or moves the current insertion point. From looking at the MSDN docs, I don't see an obvious way to do this, such as a TextTyped event on the document or application objects.
Does anyone know if this is possible without polling the document?
Actually, there is a way to run some code when a word has been typed: you can use SmartTags and override the Recognize method. This method is called whenever a word is typed, i.e. whenever the user types some text and hits the space, tab, or enter key.
One problem with this, however, is that changing the text via Range.Text is itself detected as a word change and triggers the method again, so it can cause infinite loops.
Here is some code I used to achieve this:
public class AutoBrandSmartTag : SmartTag
{
    Microsoft.Office.Interop.Word.Document cDoc;
    Microsoft.Office.Tools.Word.Action act = new Microsoft.Office.Tools.Word.Action("Test Action");

    public AutoBrandSmartTag(AutoBrandEngine.AutoBrandEngine _engine, Microsoft.Office.Interop.Word.Document _doc)
        : base("AutoBrandTool.com/SmartTag#AutoBrandSmartTag", "AutoBrand SmartTag")
    {
        this.cDoc = _doc;
        this.Actions = new Microsoft.Office.Tools.Word.Action[] { act };
    }

    protected override void Recognize(string text, Microsoft.Office.Interop.SmartTag.ISmartTagRecognizerSite site,
        Microsoft.Office.Interop.SmartTag.ISmartTagTokenList tokenList)
    {
        if (tokenList.Count < 1)
            return;

        int start = 0;
        int length = 0;
        int index = tokenList.Count > 1 ? tokenList.Count - 1 : 1;
        ISmartTagToken token = tokenList.get_Item(index);
        start = token.Start;
        length = token.Length;
    }
}
As you've probably discovered, Word has events, but they're for really coarse actions like a document open or a switch to another document. I'm guessing MS did this intentionally to prevent a crappy macro from slowing down typing.
In short, there's no great way to do what you want. A Word MVP confirms that in this thread.