Who or what is '_mbsetupuser'? - macos-sierra

In all my terminal sessions on OS X or macOS, if I type who, in addition to the expected users (all me in various windows) I also always see
_mbsetupuser console ...
Just who is that, and what is he/she doing?

As the name suggests, it is a process associated with setup, which, in this case, is running as a result of the upgrade from 10.11.x to 10.[11|12].x+y or from 10.12.x to 10.12.x+y (and may also appear when upgrading from older versions to 10.11.x). This process does not appear after an update to 10.13.x.
Unfortunately, though the "About This Mac" dialog may say that your version is 10.[11|12].x+y, you are in fact effectively between versions, and will see all kinds of odd behavior (repeated requests to unlock the Keychain, connectivity problems, Wi-Fi issues, mail synchronization failures, process crashes, etc.) until you complete the installation process, which you should be able to accomplish with a reboot.


How to interact with the OpenModelica embedded OPC UA server

I have built and started an embedded OpenModelica OPC UA server with the BouncingBall model like so:
$ omc +s path/to/model
$ ./BouncingBall -embeddedServer=opc-ua -rt=1
Now I'm trying to interact with it using an OPC UA client. However, I don't understand how I'm supposed to interact with the server properly. As far as I know, this is undocumented.
The most promising approach seems to be to set enableStopTime to false and run to true. The simulation then seems to run indefinitely, and the values seem to make sense, though I appear to be able to extract them only in real time. While it is running, if I set run to false, the server seems to enter an erroneous state and refuses to give any values back.
If I restart the executable and instead set step to true, nothing seems to change, and after trying to set step to true a second time the server becomes unresponsive. The -rt=1 option doesn't seem to matter; the server seems to enter the same state as above (1).
(After a restart) If I leave enableStopTime set to true and set run to true, the simulation runs to the stop time and then the server quits with the message "The simulation finished successfully". Maybe this is intended, but it seems odd; it would make sense to be able to restart the simulation or trigger it with new options.
What I would hope to be able to do: start and stop the simulation, as well as rewind to a certain point to check the values there. It seems to me that the API "affords" this functionality, and it could probably be provided by hackily wrapping the executable and the API. Are the above bugs, or intended behavior? What is the intended way to interact with an OPC UA server in these cases?
The OpenModelica compiler version is 1.16.0~1-g84b4a71
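The "wrapping the executable" idea from the question could look roughly like the sketch below. This is not an OpenModelica API; it is a hypothetical supervisor loop, and a dummy command stands in for ./BouncingBall so the sketch runs anywhere: relaunch the process whenever it exits with the "finished successfully" message, which gives a crude restartable simulation.

```python
import subprocess
import sys

# Stand-in for `./BouncingBall -embeddedServer=opc-ua`: a dummy command
# that just prints the server's final message. In a real wrapper this
# would be the path to the compiled model executable.
FAKE_SERVER = [sys.executable, "-c",
               "print('The simulation finished successfully.')"]

def run_until_stopped(max_restarts=2):
    """Relaunch the 'server' each time it finishes cleanly, up to a limit."""
    runs = 0
    while runs < max_restarts:
        out = subprocess.run(FAKE_SERVER, capture_output=True,
                             text=True).stdout
        runs += 1
        if "finished successfully" not in out:
            break  # crashed or was stopped some other way
    return runs

print(run_until_stopped())  # 2
```

Rewinding to an earlier time point would still require the server itself to support it; a wrapper like this can only restart from the beginning.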
Please try the latest nightly build
It includes the following commit.
That might solve it. I believe things worked without subscriptions before, since I could never reproduce this without them.
(By the way, do people go through our git commit feed and try to reproduce bugs fixed in the last 24 hours? We quite often get questions about bugs that were fixed just recently.)

Can I open and run from multiple command line prompts in the same directory?

I want to open two command line prompts (I am using CMDer) from the same directory and run different commands at the same time.
Would those two commands interrupt each other?
One is for compiling a web application I am building (takes like 7 minutes to compile), and the other is to see the history of the commands I ran (this one should be done quickly).
Thank you!
Assuming that CMDer does nothing more than issue the same commands to the operating system as a standard cmd.exe console would, the answer is a clear "Yes, they do interfere, but it depends" :D
Breakdown:
The first part, "opening multiple consoles", is certainly possible. You can open N console windows and switch each of them to the same directory without any problems (except maybe RAM restrictions).
The second part, "run commands which do or do not interfere", is the tricky one. If your idea is that a console window presents something like an isolated environment, where you can do things as you like and, once you close the window, everything is back to normal as if you had never touched anything (think of a virtual-machine snapshot that is reverted when you close the VM), then the answer is: this is not the case. There will be observable cross-console effects.
Think about deleting a file in one console window and then opening that file in a second console window: it would not be very intuitive if the file had not vanished in the second console window as well.
However, there are sometimes delays until changes to the file system become visible to another console window. It could be that you delete a file in one console, run dir in the file's directory from another console, and still see the file in the listing. But if you try to access it, the operating system will certainly quit with an error message of the kind "File not found".
Generally, you should consider a console window to be a "View" of your system. If you do something in one window, the effect is present in the other, because you changed the underlying system, which exists only once (the system is the "Model", as in the "Model-View-Controller" design pattern you may have heard of).
An exception to this might be changes to environment variables. These are copied from the current state when a console window is started, and if you change the value of such a variable, the other console windows stay unaffected.
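That copy-on-start behavior can be sketched with two processes (Python here purely for illustration; the variable name DEMO_VAR is made up): the child starts with a copy of the parent's environment, and changes it makes never propagate back.

```python
import os
import subprocess
import sys

# Each new process receives a *copy* of its parent's environment,
# just as each console window does. DEMO_VAR is a made-up name.
os.environ["DEMO_VAR"] = "parent-value"

# The child overwrites its own copy of the variable and reports it.
child_output = subprocess.run(
    [sys.executable, "-c",
     "import os; os.environ['DEMO_VAR'] = 'child-value'; "
     "print(os.environ['DEMO_VAR'])"],
    capture_output=True, text=True,
).stdout.strip()

print("child saw:", child_output)               # child saw: child-value
print("parent still:", os.environ["DEMO_VAR"])  # parent still: parent-value
```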
So, in your scenario, if you let a build/compile operation run, and during this process some files on your file system are created, read (locked), altered, or deleted, then there is a possible conflict if the other console window tries to access the same files. This is a so-called "race condition": a non-deterministic situation in which it is unclear which state of a file the second console window will see (or both windows, if the second one also changes files that the first one wants to work with).
If there is no interference at the file level (reading the same file is allowed; writing to the same file is not), then there should be no problem letting both tasks run at the same time.
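If both consoles really must write to the same file, an advisory lock is one way to serialize the writes. A minimal sketch in Python, assuming a Unix-like system with fcntl available (the file name and helper are made up):

```python
import fcntl
import os
import tempfile

# Two writers appending to a shared file, serialized with an advisory
# lock (fcntl.flock) so neither clobbers the other mid-write.
path = os.path.join(tempfile.mkdtemp(), "shared.log")

def append_line(text):
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # block until we hold the lock
        f.write(text + "\n")
        fcntl.flock(f, fcntl.LOCK_UN)

append_line("from console 1")
append_line("from console 2")
print(open(path).read(), end="")
```

Both "console" writes land intact because each holds the exclusive lock while it appends; without coordination, interleaved partial writes are possible.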
However, at a very detailed level, both processes do interfere in that they need the same limited (but plentiful) CPU and RAM resources of your system. This should not pose any problems with today's PC computing power, considering multiple separate cores, 16GB of RAM, terabytes of hard drive storage, fast SSDs, and so on.
That is, unless there is a very demanding, highly parallelizable, high-priority task to be considered, which eats up 98% of the CPU time, for example. Then there might be a considerable slowdown of other processes.
Normally, the operating system's scheduler does a good job of giving each user process enough CPU time to finish as quickly as possible, while still presenting a responsive mouse cursor, playing some music in the background, allowing a Chrome running with more than 2 tabs ;) and uploading the newest telemetry data to some servers on the internet, all at the same time.
There are also techniques which make it possible for a file to be available as snapshots at given timestamps; the keyword under Windows is "Shadow Copy". Without going into details, this technique allows, for example, defragmenting a file while it is being edited in some application, or a backup copying a (large) file while a delete operation runs on the same file. The operating system takes the access time into account when a process requests access to a file: it could let the backup finish first before scheduling the delete operation, since the delete was started after the backup (in this example), or it could do even more sophisticated things to present a synchronized file-system state, even if that state is actually changing at the moment.

Is it possible to write a program that will set the computer on fire?

Let's assume you have administrator access, and that this is a run-of-the-mill laptop or desktop. Is it possible to write a program that will result in a fire, or something equally destructive?
EDIT:
To the "how do you think bombs work" answer: valid answer, but I'm asking: if I have a pocket universe with just a laptop, is it possible to have a program that, when run, will set the computer on fire?
It isn't impossible, but with most off-the-shelf goods, it is unlikely you will find a deterministic way to do it. Groups like CSA, Underwriters Laboratories, and ETL are pretty careful about what they give their stamp of approval to.
Depending on the last time you flew in the US, you may have heard various warnings that you must not carry a certain brand of Samsung phone or Apple laptop on board; further, you are not allowed to store them in your luggage, and if you drop one between the seats, you should notify the attendants.
These are all precautions because the FAA has determined that these devices pose a fire risk, presumably due to overheating. So, if you ran caffeinate (which prevents the machine from sleeping) together with a heavy workload, you could induce temperatures high enough to cause ignition.
But, heavy emphasis on "could". There are a lot of defenses built into the batteries themselves to prevent this; then there are system-management components in the computer to prevent it; then there are monitoring components on the CPU to prevent it. So whatever you do has to line up a simultaneous failure of all of these systems.
Not impossible, but maybe not far from it.

Meaning of SigQuit in Swift 3, Xcode 8.2.1

I am trying to create a custom keyboard in iOS 10 that acts like a T-9 keyboard. When switching to my custom keyboard, the app extension reads in a list of about 10,000 words from a txt file and builds a trie out of them.
However, I keep getting a "SigQuit" error when I first try to use the keyboard. Rerunning the keyboard right after it fails usually works. Xcode doesn't give me any explanation for the failure other than the SigQuit error on some assembly code line.
So, my question is: for what reasons might Xcode throw a SigQuit error? I have tried debugging to no avail, and googling SigQuit does not seem to return any useful information. I considered that my keyboard is using too many resources / taking up too much time on startup, but I checked the CPU usage and it peaked at less than 1%. Similarly, the memory used was something like 25 MB, which doesn't seem terrible.
Keyboard extensions have a much lower memory limit than apps. Your extension was likely killed by the operating system.
See: https://developer.apple.com/library/content/documentation/General/Conceptual/ExtensibilityPG/ExtensionCreation.html
Memory limits for running app extensions are significantly lower than the memory limits imposed on a foreground app. On both platforms, the system may aggressively terminate extensions because users want to return to their main goal in the host app. Some extensions may have lower memory limits than others: For example, widgets must be especially efficient because users are likely to have several widgets open at the same time.
Yeah, seems like you have to Run, then Stop, and it'll run fine on the simulator or device.
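For reference, the structure the question describes can be sketched as follows (Python for illustration only; the actual extension is in Swift). A nested-dict trie allocates one node object per distinct prefix character, and that per-node overhead is the kind of thing that can push an extension past its memory budget even while CPU usage stays near zero.

```python
# A nested-dict trie over a word list: each node is its own dict, with
# "$" as a made-up end-of-word marker.
def build_trie(words):
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker
    return root

def contains(trie, word):
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

trie = build_trie(["the", "then", "than"])
print(contains(trie, "then"), contains(trie, "them"))  # True False
```

With ~10,000 words this means tens of thousands of small allocations; flatter representations (e.g. a sorted word array with binary search) trade lookup elegance for a much smaller footprint.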

Using socket interface keeps sending overflow warnings

I'm establishing a watch via the socket interface, and then subscribing to changes.
For each incoming PDU, if the map has a "warning" key, I output the warning to the console/user, as the docs suggest.
However, when an overflow happens, it looks like I don't get the "warning" key just once; instead, every incoming PDU carries the same warning ("recrawl happened 1 time") over and over (AFAICT), so I end up spamming the console with the same error message.
For me, it'd be preferable if Watchman only sent the "warning" key once per overflow event. Otherwise I'm looking at having to cache the "warnings already shown to the user" to avoid spamming the console.
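That client-side cache is at least straightforward. A sketch in Python (the PDUs below are hand-written stand-ins, not real Watchman output): show each distinct warning once, since the server resends it on every PDU until it is cleared.

```python
# Client-side dedupe of the repeated "warning" field: print each
# distinct warning only the first time it appears.
seen_warnings = set()
shown = []  # kept for inspection; the real client would only print

def handle_pdu(pdu):
    warning = pdu.get("warning")
    if warning is not None and warning not in seen_warnings:
        seen_warnings.add(warning)
        shown.append(warning)
        print(warning)

pdus = [
    {"files": []},
    {"warning": "recrawl happened 1 time", "files": []},
    {"warning": "recrawl happened 1 time", "files": []},
]
for pdu in pdus:
    handle_pdu(pdu)
# the warning is printed exactly once
```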
Also, in terms of overflow behavior in general, the warning says:
To resolve, please review the information on
https://facebook.github.io/watchman/docs/troubleshooting.html#recrawl
To clear this warning, run:
`watchman watch-del ... ; watchman watch-project ...`
But I'd prefer to have a way to reset the warning without having to cancel and resubscribe my subscription. E.g. right now I have to control-c kill my program, run the watchman watch-del command, then restart my program.
I could automate that internally, e.g. have my program detect the "overflow happened" warning message, kill its subscription, issue a watch-del, and then re-issue the watch.
But even if I could reset the warning via the socket interface, or do this internal watch-del, I'm wondering why the warning needs to be reset at all. In theory, if watchman has already done a recrawl, and I told the user it happened (by logging it to the console), shouldn't things be fine now? Why is the watch-del + re-watch required in the first place?
E.g. as long as overflows are not happening constantly, it seems like watchman doing the recrawl (so it gets back in sync with the file system) and issuing one warning PDU should mean everything is back to normal, and my user program could, ideally, stay dumb/simple and just keep getting the post-overflow/post-recrawl PDUs on its same/existing subscription.
Sorry that this isn't as clear as it could be.
First: you don't strictly have to take action here, as the watchman service has already recovered from the overflow.
It's just advising you that you may have a local configuration problem; if you are on Linux, you might consider increasing the various inotify sysctl parameters. If you are on a Mac, there is very little you can do about this. The warning is sticky so that it stays in front of the user. Later, users requested a way to suppress it, so the suggestion for deleting and restarting the watch was added.
The result was a pretty noisy warning that caused more confusion.
In watchman 4.7 we added a configuration option to turn off this warning: https://facebook.github.io/watchman/docs/config.html#suppress_recrawl_warnings
The intent here is to hide the warning from users that don't know how (or don't have permission) to remediate the system configuration. It works well in conjunction with the (undocumented) perf sampling configuration options that can record and report the volume of recrawls to a site-specific reporting system.