I have a system with both a discrete and an integrated graphics card (one Nvidia, the other Intel).
To my surprise I found that I could hook up a monitor to each one individually.
Moreover, I could play a game in a window on the monitor attached to the discrete card, and drag it to the other monitor attached to the integrated card (albeit with a drop in performance).
I also noticed that in the first case, only the discrete card was busy, but once I moved the window, both cards were busy.
I realize this is probably not an optimal configuration, but it got me curious as to how the OS handles this situation. There must be some communication going on between the cards for this to work, such as one card doing the actual computation and the other outputting the result.
Does anyone have any insight into this?
Typically, when you have an Intel integrated card and a discrete card, much like you have, the integrated one is used by default. When the machine's rendering load crosses a certain threshold, the discrete card kicks in and takes over.
We're using the HoloLens 2 and Azure world anchors to place content within a room, but when viewing the app content the anchors/room are rotated 90° or 180° from where they should be. They are always square on and never diagonal, so it's like the room is being flipped.
We're confident it's not the code, as we've successfully used it in multiple other locations; it's just this one room we can't get it to work in.
To ensure it's not our code we've tried rebuilding the app, as well as ensuring we're on the latest versions of Unity and MRTK. The HoloLens 2 firmware is also up to date, and we've tried adding a delay to the code, just in case the network is too fast and it needs some latency to do its thing, but none of this has helped.
Additionally, the anchors stick in a position for 5–20 minutes, then move to a new position - so occasionally they seem to be working, but even a broken clock is right twice a day!
There's no consistency across headsets either, and the anchors will be in a different position for each one. The app and code are rock solid in all other locations; we've tested placing the content on another Wi-Fi network, and on the same Wi-Fi network but in another location, and everything works as it should. We've tried removing all holograms and nearby holograms to refresh the spatial map, but this doesn't cure the issue. We've also tried making the room less complex as well as more complex, but again this doesn't fix things, and if it does it's only temporary.
Our gut feeling is it's something out of our control, such as Wi-Fi access points throwing it out of whack, or interference of some kind (magnetic, radio, etc.). If it's useful or relevant, we also use Photon to facilitate a shared experience across devices.
Any suggestions much appreciated!
What is the maximum polygon count Unity can handle? I am building a furniture-based application in Unity, and my client wants highly detailed 3D models. Are there any best practices to keep the quality of the assets without hurting the performance of the application? I am building the application for the following platforms: OS X, Windows, Android and iOS.
There isn't a hard technical limit, other than 65k vertices per single mesh (if using the default 16-bit indexing), and I don't think there's a limit on mesh count. If you are reusing meshes, you can draw them using Graphics.DrawMeshInstanced, or, if you fancy some buffer work, Graphics.DrawMeshInstancedIndirect.
With a relatively modern graphics card, enough RAM, etc., you can easily work in the million range. You can push it further (tens of millions? hundreds of millions?) but at some point performance will inevitably go down.
So basically the answer is: any number that will fit in your RAM, with a secondary constraint depending on your hardware if you want it in real time (Unity will render any number of polygons as long as it doesn't run out of memory, but a ludicrous polygon count might take multiple seconds to render).
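For context, here's a minimal sketch of what drawing reused meshes with Graphics.DrawMeshInstanced can look like; the mesh, material and grid layout are placeholder assumptions, and the material needs "Enable GPU Instancing" ticked:

```csharp
using UnityEngine;

// Minimal sketch: drawing many copies of one mesh with GPU instancing.
// rockMesh, rockMaterial and the grid size are placeholder assumptions.
public class InstancedRocks : MonoBehaviour
{
    public Mesh rockMesh;
    public Material rockMaterial;

    // DrawMeshInstanced accepts at most 1023 matrices per call.
    private readonly Matrix4x4[] matrices = new Matrix4x4[1023];

    void Start()
    {
        for (int i = 0; i < matrices.Length; i++)
        {
            Vector3 pos = new Vector3(i % 32, 0f, i / 32);
            matrices[i] = Matrix4x4.TRS(pos, Quaternion.identity, Vector3.one);
        }
    }

    void Update()
    {
        // One instanced draw per batch instead of 1023 separate renderers.
        Graphics.DrawMeshInstanced(rockMesh, 0, rockMaterial, matrices, matrices.Length);
    }
}
```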
If you want detailed objects and high performance, you can use Level of Detail (LOD); you can find more information here: https://docs.unity3d.com/560/Documentation/Manual/LevelOfDetail.html
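As a rough illustration (normally you would configure this in the editor rather than in code), a LODGroup can also be set up from script; the renderer fields below are hypothetical stand-ins for your high/medium/low-poly furniture meshes:

```csharp
using UnityEngine;

// Minimal sketch: building a LODGroup from script.
public class FurnitureLodSetup : MonoBehaviour
{
    public Renderer highDetail;   // e.g. the full-resolution chair mesh
    public Renderer mediumDetail;
    public Renderer lowDetail;

    void Start()
    {
        LODGroup group = gameObject.AddComponent<LODGroup>();

        // Each LOD lists its renderers and the screen-height fraction
        // below which the next (coarser) LOD takes over.
        LOD[] lods = new LOD[]
        {
            new LOD(0.6f, new Renderer[] { highDetail }),
            new LOD(0.3f, new Renderer[] { mediumDetail }),
            new LOD(0.1f, new Renderer[] { lowDetail }),
        };

        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}
```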
For Android devices, it's advisable to keep the scene's polygon count under 90k; on PC, this number can go up to 1 or 2 million.
But there are also techniques like occlusion culling that increase performance.
See this link about it: https://docs.unity3d.com/Manual/OcclusionCulling.html
I am trying to perform Time Difference of Arrival (TDOA) estimation in real time using the PS3 Eye. Since it has a built-in 4-microphone array, I've rearranged the array into a square array and cross-correlated the signals in MATLAB to obtain relatively accurate TDOA estimates. However, so far I've been recording the signal, saving the files (4 individual files, one for each microphone in the array), and then feeding those files into MATLAB to read after the fact.
My problem is: MATLAB doesn't recognize the PS3 Eye's microphones separately; it only recognizes the device as a whole. So far, Audacity is one of the few programs that actually handles this well, but I am inexperienced with the program and don't know its real-time capabilities. Does anyone have suggestions as to how I can perform real-time signal analysis in this manner? If using something other than the PS3 Eye would work better, I am open to suggestions. Thanks.
I know very little about MATLAB or the PS3 Eye, but various hardware microphones allow you to capture a single audio stream containing multiple (typically 2) channels. The audio data will come to you in frames, each frame containing a single sample for each channel.
I'm not really sure what you mean by "recognizes as a whole", but I assume you mean MATLAB is mixing the channels so that the device only produces one usable channel. If you can capture the channels to file, and they all originate from the same device (i.e. hardware clock), you should be fine except that this solution is not "realtime".
There is a similar discussion on Sound Exchange which ends up suggesting the Microcone. There are a variety of other products, from microphone arrays to digital mixers for analog mic sources, also, but your question seems to be mainly about how to get the data with software.
In short, make sure you are seeing a single device with multiple channels. This will ensure each channel uses the same hardware clock and will prevent drift issues.
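To illustrate what that frame layout means in practice, here is a small, library-agnostic sketch (in C#, but the idea is the same in MATLAB or anything else) of splitting an interleaved 4-channel buffer into one array per microphone, which is what you'd cross-correlate for TDOA:

```csharp
static class AudioFrames
{
    // Most capture APIs hand you an interleaved buffer:
    //   [ch0, ch1, ch2, ch3, ch0, ch1, ch2, ch3, ...]
    // One "frame" is one sample per channel. Splitting the buffer into
    // per-channel arrays (one per microphone) is all that's needed before
    // cross-correlating pairs of channels.
    public static float[][] Deinterleave(float[] interleaved, int channelCount)
    {
        int frames = interleaved.Length / channelCount;
        var channels = new float[channelCount][];
        for (int c = 0; c < channelCount; c++)
            channels[c] = new float[frames];

        for (int f = 0; f < frames; f++)
            for (int c = 0; c < channelCount; c++)
                channels[c][f] = interleaved[f * channelCount + c];

        return channels;
    }
}
```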
This is just a wild guess, as I don't know much about MATLAB's real-time input options.
Maybe try Reaper (http://www.reaper.fm/). It has great multi-track capabilities and you can extend it (I think the scripting language is Python). Nice documentation and third-party contributions, plus OSC and ReWire support. So maybe you could think of routing the audio to Reaper, doing some data normalization there in Python, and then routing the data to MATLAB.
Or you could use Pure Data, which is open source and very open, with lots of patches (basic processing units) that you could probably put together.
HTH
BTW, I am in no way affiliated with Reaper or PD.
EDIT: you might also want to consider SuperCollider (http://supercollider.github.io/) or ChucK (http://chuck.cs.princeton.edu/).
Here's a lead, but I haven't been able to test it, yet.
On Windows, you can record a single 4-track OGG audio file from the Eye with Audacity (using the WASAPI driver selection).
As of 23 Jul 2014, the pa-wavplay MEX builds for 32-bit and 64-bit support WASAPI. You will have to rebuild the PortAudio library to select the WASAPI interface as described here, and you can then get all four tracks in MATLAB (on Windows).
Sadly, if you're not on Windows, I don't have any suggestions. Adjusting the PortAudio build might help, but I only know that WASAPI works with the Eye.
I want my students to use Enchanting, a derivative of Scratch, to program Mindstorms NXT robots to drive a pre-programmed course, follow a line and avoid obstacles (two-state, five-state and proportional line following). Is Enchanting developed enough for middle school students to program these behaviors?
I'm the lead developer on Enchanting, and the answer is: Yes, definitely.
The video demoing Enchanting 0.0.4 shows how to make a proportional line follower (and you could extend it to use a PID controller, if you wish). If you download the latest version, 0.2.2, it includes a sample showing a two-state line follower (and you can see a video and download code here). You, or a middle-schooler with some instruction / playing around, can readily create a program to handle n states, and, especially if you follow a behaviour-oriented approach, you can avoid obstacles at the same time.
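For anyone wondering what the "proportional" part boils down to, here is a rough sketch of the same control logic the Enchanting blocks express (written as C#-style text code for compactness); the sensor/motor calls are placeholders for whatever your robot API provides, and the constants are assumptions you would tune:

```csharp
// Sketch of a proportional line follower: steer by an amount proportional
// to how far the light reading is from the line's edge value.
abstract class LineFollower
{
    const double Target = 50.0;    // assumed light reading at the edge of the line
    const double Kp = 0.8;         // proportional gain (tune on the robot)
    const double BasePower = 40.0; // forward power with zero error

    protected abstract double ReadLightSensor();                     // placeholder, 0..100
    protected abstract void SetMotorPower(string port, double power); // placeholder

    public void Step()
    {
        double error = ReadLightSensor() - Target;
        double correction = Kp * error;

        // Speed up one wheel and slow the other to turn back toward the line.
        SetMotorPower("B", BasePower + correction);
        SetMotorPower("C", BasePower - correction);
    }
}
```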
As far as I know, yes and no.
What Scratch does with its sensor board, LEGO WeDo, and the S4A (Scratch for Arduino) version, as well as, I believe, with the NXT, is basically use its remote sensor protocol - it exchanges messages on TCP port 42001.
A client written to interface that port with an external system allows communication of messages and sensor data. Scratch can pick up sensor state and pass info to actuators, every 75 ms according to the S4A discussion.
But that isn't the same as programming the controller - we control the system remotely, which is already very nice, but we're not downloading a program to the controller (the NXT brick) that your robot can use to act independently when it is disconnected.
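For the curious, a client for that protocol is tiny. The sketch below (C#) assumes Scratch 1.4 running locally with remote sensor connections enabled, and uses the protocol's framing of a 4-byte big-endian length followed by a UTF-8 message such as sensor-update "distance" 42 or broadcast "obstacle":

```csharp
using System.Net.Sockets;
using System.Text;

// Minimal sketch of a Scratch remote-sensor-protocol client.
class ScratchRemoteSensor
{
    static void Send(NetworkStream stream, string message)
    {
        byte[] body = Encoding.UTF8.GetBytes(message);
        byte[] header =
        {
            (byte)((body.Length >> 24) & 0xFF),   // 4-byte big-endian length prefix
            (byte)((body.Length >> 16) & 0xFF),
            (byte)((body.Length >> 8) & 0xFF),
            (byte)(body.Length & 0xFF),
        };
        stream.Write(header, 0, 4);
        stream.Write(body, 0, body.Length);
    }

    static void Main()
    {
        using (TcpClient client = new TcpClient("127.0.0.1", 42001))
        {
            NetworkStream stream = client.GetStream();

            // Push a sensor value into Scratch, then fire a broadcast.
            Send(stream, "sensor-update \"distance\" 42");
            Send(stream, "broadcast \"obstacle\"");
        }
    }
}
```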
Have you looked into 12blocks? http://12blocks.com/ I have been using it for the Propeller and it's great, and it has an NXT option (which I have not tested).
It's an old post, but I'll answer anyway.
Enchanting looks interesting, and seems to be still an active project.
I would actually take the original Scratch (1.4), as it's more familiar and reliable.
It's easy to interface hardware with Scratch using the remote sensor protocol. I use a simple serial interface (over a USB adapter) which provides 3 digital inputs and 3 digital outputs. With that, it's possible to implement projects such as traffic lights and light/water/heat sensors, using only LEDs, resistors, reed contacts, phototransistors, switches, PTSs.
The cost is less than $5.
For some motor-based projects like factory belts, elevators, etc., not much more is required: a battery and a couple of transistors / relays / a motor driver.
I need to develop a real-time application which can handle user input (from some external control panel) as fast as possible and provide output to an LCD monitor (very fast as well).
To be more exact - I need to handle fixed-period interrupts (with a period of 1 ms) to recalculate an internal model, with the current state fetched from the external control panel.
When the internal model changes, I need to update the picture on the LCD monitor (right now I think the most appropriate way is to update on each interrupt). I also don't want any delays here.
What is the most suitable platform to implement it? And also which one is the most cost-effective?
I've heard about QNX, IntervalZero RTX, rtlinux but don't know the details and abilities of each one.
Thanks!
As far as the different OSes go, I know QNX has very good "hard" real-time behaviour and has been built and optimized from the ground up. It also now has Qt running on it (QNX 6.5) for full-featured GUIs.
I have heard (second-hand) anecdotal information that rtlinux is very close to hard real time (guaranteed real time), but it can sometimes be late if a driver (usually third-party) is not coded well. [This was from an RTOS vendor, so take it for what it is worth.]
As a design issue, I'd decouple the three separate operations into three threads with different priorities: one thread to fetch the data and set a semaphore that new data is ready, one thread to update the model and set a semaphore that the model is ready, and one thread to update the GUI. I would run the GUI thread at a much slower update rate. Most monitors are in the 60–120 Hz range for updating, so why update faster than the data can be shown on the screen?
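Here is a rough sketch of that split, using generic threading primitives (C#) rather than any particular RTOS API; the Poll/Recalculate/Redraw methods are placeholders for your actual I/O, model and drawing code:

```csharp
using System.Threading;

// Sketch of the three-thread pipeline: acquisition at highest priority,
// model update next, GUI at normal priority, decoupled by semaphores.
class RealTimePipeline
{
    private readonly SemaphoreSlim dataReady  = new SemaphoreSlim(0);
    private readonly SemaphoreSlim modelReady = new SemaphoreSlim(0);

    public void Start()
    {
        var acquire = new Thread(AcquireLoop) { Priority = ThreadPriority.Highest };
        var model   = new Thread(ModelLoop)   { Priority = ThreadPriority.AboveNormal };
        var gui     = new Thread(GuiLoop)     { Priority = ThreadPriority.Normal };
        acquire.Start(); model.Start(); gui.Start();
    }

    private void AcquireLoop()
    {
        while (true)
        {
            PollControlPanel();        // driven by the 1 ms timer/interrupt
            dataReady.Release();       // signal: new input is available
        }
    }

    private void ModelLoop()
    {
        while (true)
        {
            dataReady.Wait();
            RecalculateModel();
            modelReady.Release();      // signal: model state changed
        }
    }

    private void GuiLoop()
    {
        while (true)
        {
            modelReady.Wait();
            RedrawScreen();            // in practice, throttle to the monitor's refresh rate
        }
    }

    private void PollControlPanel() { }  // placeholder
    private void RecalculateModel() { }  // placeholder
    private void RedrawScreen() { }      // placeholder
}
```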