How to display simulation results from Dymola in the LabVIEW environment?

I would like to simulate a radial fan test rig in Dymola.
The test rig itself acquires data from several sensors using LabVIEW.
My plan is to compare the simulation results from Dymola with the sensor data in real time in the LabVIEW environment.
At the same time, I also want to vary the system by changing the rotational speed and the valve opening while the test rig is running.
Is there any way to do this?
I have attached a schematic of the test rig.

Related

How does OS combine discrete and integrated graphics cards?

I have a system that has both a discrete and an integrated graphics card (one Nvidia, the other Intel).
To my surprise I found that I could hook up a monitor to each one individually.
Moreover, I could play a game in a window on the monitor attached to the discrete card, and drag it to the other monitor attached to the integrated card (albeit with a drop in performance).
I also noticed that in the first case, only the discrete card was busy, but once I moved the window, both cards were busy.
I realize this is probably not an optimal configuration, but it got me curious: how does the OS handle this situation? There must be some communication going on between the cards for this to work, such as one card doing the actual computation and the other outputting the result.
Does anyone have any insight into this?
Typically, when you have an Intel integrated card and a separate discrete card, as you do, the integrated card is used by default. When the machine's rendering load reaches a certain threshold, the discrete card kicks in and takes over.

Is it possible to send real-time commands from a PC to a PLC?

We are using a frequency inverter to power a servo motor. This has to be programmed using a PLC. Is it possible to gather data from a running program and use those values to control the movement/frequency of the inverter?
(For example: we built a racing game, and we'd like to build a simulation chair that can support a grown person and react to acceleration/braking etc. in the game.)
Thanks
Yes, I believe what you're asking is possible. I personally use a VB script running on a PC to write to registers in a PLC... so that's one way to do it.
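For a concrete illustration of the PC-to-PLC side, here is a minimal sketch in Python rather than VB. It assumes the PLC (or the drive itself) speaks Modbus TCP, which many do; the IP address, register number, and value are placeholders you would replace with the mapping from your hardware manual:

    import socket
    import struct

    PLC_IP = "192.168.0.10"   # placeholder: your PLC's address
    PLC_PORT = 502            # standard Modbus TCP port
    REGISTER = 100            # placeholder: holding register mapped to the setpoint
    VALUE = 1500              # placeholder: target frequency/speed in drive units

    def write_single_register(ip, port, register, value, unit_id=1):
        """Send a Modbus TCP 'Write Single Register' (function 0x06) request."""
        # MBAP header (transaction id, protocol id 0, length, unit id)
        # followed by the PDU (function code, register address, value).
        request = struct.pack(">HHHBBHH", 1, 0, 6, unit_id, 0x06, register, value)
        with socket.create_connection((ip, port), timeout=2.0) as sock:
            sock.sendall(request)
            return sock.recv(256)  # a successful write echoes the request back

    write_single_register(PLC_IP, PLC_PORT, REGISTER, VALUE)

The same register write could drive a speed setpoint in a loop fed by your game's telemetry, subject to whatever update rate your PLC and network can sustain.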

Neuroph Studio: train, stop, pause and test button disabled

I have installed Neuroph Studio 2.91 for Windows. I have created a neural network using a Multilayer Perceptron and loaded some training data. Now I'd like to train the network, but the Train, Stop, Pause, and Test buttons are disabled. How can I enable them? The program also stopped responding several times and I had to restart it. Reinstalling did not help.
Not sure if you're still looking for an answer, but I was struggling with the same this morning. Turned out that you have to drag the training set from the files pane, and drop it on the top-most layer of the NN (input layer). Once you do that, the train, stop, test input buttons become enabled. Consult http://neuroph.sourceforge.net/tutorials/MultiLayerPerceptron.html for more info.
Have only been using Neuroph for a few hours, and it was so erratic that it drove me crazy. But I realized that once you understand how things work, it behaves as expected most of the time. For example, if you run a training session where your network architecture and training set are such that training might never converge, and you haven't set a maximum-iterations limit, it will keep training forever. Much of the time the graphs don't update properly in real time, so you'd never know that training is still going on, but it will slow your machine to a crawl.
I didn't find enough material for a complete newb to get going, and I'm still struggling with some very basic stuff (like how to reset a neural network from the studio), but it seems worth ploughing through the initial stumbles just for the sake of having a tool to design, train, and troubleshoot your NN in a GUI and then export it to your app.
I also faced this problem a few hours ago, and it was caused by the version: I was using 2.93, so I uninstalled it and installed 2.8, and it works perfectly for me.
Hope this helps!

Recording multi-channel audio input in real-time

I am trying to perform Time Difference of Arrival (TDOA) estimation in real time using the PS3 Eye. Since it has a built-in 4-microphone array, I've successfully rearranged the microphones into a square array and cross-correlated the signals using MATLAB to obtain a relatively accurate TDOA algorithm. However, so far I've been recording the signal, saving the files (4 individual files, one per microphone in the array), and then feeding those files into MATLAB to read after the fact.
My problem is that MATLAB doesn't recognize the PS3 Eye's microphones separately; it only recognizes the device as a whole. So far, Audacity is one of the few programs that actually handles this well, but I am inexperienced with it and don't know its real-time capabilities. Does anyone have suggestions as to how I can perform real-time signal analysis in this manner? If something other than the PS3 Eye would work better, I am open to suggestions. Thanks.
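(As a sketch of the cross-correlation step itself, independent of which capture path is used, here is a delay estimate in Python/NumPy; this is not the asker's MATLAB code, and the sample rate and signals are made up for illustration.)

    import numpy as np

    def estimate_delay(sig_a, sig_b, fs):
        """Return how much later sig_b arrives than sig_a, in seconds,
        from the peak of the full cross-correlation."""
        corr = np.correlate(sig_a, sig_b, mode="full")
        lag = np.argmax(corr) - (len(sig_b) - 1)  # lag in samples
        return -lag / fs

    # Synthetic check: the same pulse, delayed by 25 samples at 16 kHz.
    fs = 16000
    a = np.zeros(1024); a[100] = 1.0
    b = np.zeros(1024); b[125] = 1.0
    print(estimate_delay(a, b, fs))  # ~0.0015625 s, i.e. 25 / 16000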
I know very little about MATLAB or PS3 eye, but various hardware microphones allow you to capture a single audio stream containing multiple (typically 2) channels. The audio data will come to you in frames, each frame containing a single sample for each channel.
I'm not really sure what you mean by "recognizes it as a whole", but I assume you mean MATLAB is mixing the channels so that the device only produces one usable channel. If you can capture the channels to file, and they all originate from the same device (i.e. the same hardware clock), you should be fine, except that this solution is not "realtime".
There is a similar discussion on Sound Exchange which ends up suggesting the Microcone. There are a variety of other products, from microphone arrays to digital mixers for analog mic sources, also, but your question seems to be mainly about how to get the data with software.
In short, make sure you are seeing a single device with multiple channels. This will ensure each channel uses the same hardware clock and will prevent drift issues.
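As a sketch of what "a single device with multiple channels" looks like in code, here is a capture-and-deinterleave example using PyAudio (a PortAudio wrapper, in Python rather than MATLAB). Whether the Eye actually enumerates as one 4-channel device depends on the OS and driver; the channel count, rate, and buffer size here are assumptions:

    import numpy as np
    import pyaudio

    CHANNELS = 4   # assumption: the driver exposes one 4-channel device
    RATE = 16000   # assumption: a sample rate the device supports
    FRAMES = 1024  # samples per channel per read

    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=CHANNELS, rate=RATE,
                     input=True, frames_per_buffer=FRAMES)

    raw = stream.read(FRAMES)              # interleaved: c0 c1 c2 c3 c0 c1 ...
    samples = np.frombuffer(raw, dtype=np.int16)
    per_channel = samples.reshape(-1, CHANNELS).T  # shape (4, FRAMES)

    stream.stop_stream()
    stream.close()
    pa.terminate()

Because all four channels come out of one stream, they share the same clock, so sample indices line up across channels without drift correction.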
This is just a wild guess, as I don't know much about MATLAB's real-time input options.
Maybe try Reaper (http://www.reaper.fm/). It has great multi-track capabilities and you can extend it (I think the scripting language is Python). Nice documentation and third-party contributions, plus OSC and ReWire support. So maybe you could route the audio to Reaper, do some data normalization there in Python, and then route the data to MATLAB.
Or you could use Pure Data, which is open source and very open, with lots of patches (basic processing units) that you could probably put together.
HTH
BTW, I am in no way affiliated with Reaper or PD.
EDIT: you might also want to consider SuperCollider (http://supercollider.github.io/) or ChucK (http://chuck.cs.princeton.edu/)
Here's a lead, but I haven't been able to test it yet.
On Windows, you can record a single 4-track Ogg audio file from the Eye with Audacity (using the WASAPI driver selection).
As of 23 Jul 2014, pa-wavplay for 32-bit and 64-bit MEX supports WASAPI. You will have to rebuild the PortAudio library to select the WASAPI interface as described here, and then you can get all four tracks in MATLAB (on Windows).
Sadly, if you're not on Windows, I don't have any suggestions. Adjusting the PortAudio build might help, but I only know that WASAPI works with the Eye.

How is time-based programming done?

I'm not really sure what the correct term is, but how are time-based programs like games and simulations made? I've just realized that I've only ever written programs that wait for input and then do something, and I'm amazed that I have no idea how I would write something like Pong :)
How would something like a flight simulator be coded? It obviously wouldn't run as fast as the computer could run it. I'm guessing everything is executed on some kind of cycle. But how do you handle it when a computation takes longer than the cycle?
Also, what is the correct term for this? Searching "time-based programming" doesn't really give me helpful results.
Games are split into simulation (decide what appears, disappears or moves) and rendering (show it on the screen, play sounds). Simulation is designed to be time-dependent: you can tell the simulator "50ms have elapsed" and it will compute 50ms worth of simulation. A typical game loop will render (which takes an arbitrary amount of time), then run the simulator for the duration since the last time the simulator was run.
If the code runs fast, then the simulator steps will be short (only a few ms) and the game will render the scene more often.
If the code runs slowly, the simulator steps will have longer steps and there will be proportionally fewer renders.
If the simulator runs slower than the simulation itself (it takes 100ms to compute 50ms worth of simulation) then the game cannot run. But this is an exceedingly rare situation, and games sometimes have emergency systems that drop the quality of the simulation to improve performance when this happens.
Note that time-dependent does not necessarily mean millisecond-level precision. Some systems implement simulations using time-based functions (traveled distance equals speed times elapsed time), while others run fixed-duration simulation steps.
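A minimal sketch of that loop in Python (the "world" and its numbers are invented purely for illustration):

    import time

    def simulate(state, dt):
        # Advance the world by dt seconds, e.g. position += velocity * dt.
        state["x"] += state["vx"] * dt

    def render(state):
        time.sleep(0.016)                # pretend drawing takes ~16 ms
        print(f"x = {state['x']:.2f}")   # stand-in for real drawing

    state = {"x": 0.0, "vx": 3.0}
    previous = time.perf_counter()
    while state["x"] < 10.0:             # stand-in for "while the game runs"
        now = time.perf_counter()
        simulate(state, now - previous)  # run the simulator for the elapsed time
        previous = now
        render(state)                    # rendering takes however long it takes

Because the simulator is handed the measured elapsed time, the world advances at the same real-time rate whether rendering is fast or slow.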
I think the correct term is "Real-time application".
For the first question, I'm with spender's answer.
If you know the elapsed time between two frames, you can calculate (with physics, for example) the new position of the elements based on the previous ones.
There are two approaches to this, each with advantages and disadvantages.
You can either go frame-based, whereby a timer signals n new frames every second and you calculate movement simply by counting elapsed frames. If computation exceeds the available time, the game slows down.
...or you keep the frame concept but also keep an absolute measure of time: when the next frame is signalled, you calculate world movement from the amount of time that has elapsed. This means that things happen in real time, but in the case of severe CPU starvation gameplay will become choppy.
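A side-by-side sketch of the two approaches in Python (the pixel speed and frame rates are made-up numbers):

    # Frame-based: movement is counted in frames, so if the machine cannot
    # hold the target frame rate, the whole game slows down with it.
    TARGET_FPS = 60
    PIXELS_PER_FRAME = 120 / TARGET_FPS  # assumed speed: 120 px/s at 60 FPS

    def advance_frame_based(x):
        return x + PIXELS_PER_FRAME      # called once per rendered frame

    # Time-based: movement is scaled by measured elapsed time, so the world
    # keeps real-time pace, but under severe load motion becomes choppy.
    SPEED = 120.0                        # pixels per second

    def advance_time_based(x, dt):
        return x + SPEED * dt            # dt = seconds since the last frame

    # At a steady 60 FPS the two agree; at 30 FPS only the time-based
    # version still covers 120 pixels per real second.
    print(advance_frame_based(0.0))         # 2.0 px after one frame
    print(advance_time_based(0.0, 1 / 30))  # 4.0 px after one slow frame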
There's an old saying that "the clock is an actor". Time-based programs are event-driven programs, but the clock is a constant source of events. At least, that's a fairly common and reasonably easy way of doing things. It falls down if you're doing hard realtime or very high performance things.
This is where you can learn the basics:
http://www.gamedev.net/reference/start_here/
Nearly all games are programmed with a real-time architecture, and the computer's capabilities (and the coding, of course :)) determine the frame rate.
Game programming is a really complex job, including object modeling, scripting, math, fast and good-looking rendering algorithms, and other things like pixel shaders.
So I would recommend checking out the available engines first (just google "free game engine").
The basic logic is to create an infinite loop (while(true){}), and in each iteration the loop should:
Listen for callbacks: keyboard, mouse, and system messages arrive here.
Run the physics based on the time elapsed since the previous frame and the user input.
Render the new frame (GDI, DirectX, or OpenGL).
Have fun
Basically, there are 2 different approaches that allow you to add animation to a game:
Frame-based Animation: easier to understand and implement, but with some serious disadvantages. Think about it this way: imagine your game runs at 60 FPS and it takes 2 seconds to draw a ball that goes from one side of the screen to the other. In other words, the game needs 120 frames to move the ball across the screen. If you run this game on a slow computer that's only able to render 30 FPS, then after 2 seconds the ball will only be in the middle of the screen. The problem with this approach is that rendering (drawing the objects) and simulation (updating the positions of the objects) are done by the same function.
Time-based Animation: a more sophisticated approach that separates the simulation code from the rendering code. The number of frames per second the computer can render does not influence the amount of movement (animation) that has to happen in 2 seconds.
Steven Lambert wrote a fantastic article about these techniques, as well as a third approach that solves a few problems with time-based animation.
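For reference, a sketch of what that third approach usually looks like: a fixed simulation timestep fed by an accumulator, with interpolation for rendering. This is my reading of the technique, not the article's exact code; the step size and toy physics are assumptions.

    import time

    DT = 1.0 / 60.0                 # fixed simulation step (assumed 60 Hz)

    def step(pos, vel):
        return pos + vel * DT, vel  # one fixed-size physics step

    pos, vel = 0.0, 3.0
    prev_pos = pos
    accumulator = 0.0
    last = time.perf_counter()

    while pos < 10.0:               # stand-in for "while the game runs"
        now = time.perf_counter()
        accumulator += now - last
        last = now
        while accumulator >= DT:    # consume real time in fixed-size steps
            prev_pos = pos
            pos, vel = step(pos, vel)
            accumulator -= DT
        alpha = accumulator / DT    # fraction of the next step already elapsed
        drawn = prev_pos * (1 - alpha) + pos * alpha  # interpolated render position
        print(f"drawn at {drawn:.2f}")
        time.sleep(0.016)           # pretend rendering takes ~16 ms

The simulation always advances in identical DT steps (good for determinism and stable physics), while rendering interpolates between the last two states so motion stays smooth at any frame rate.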
Some time ago I wrote a C++/Qt application to demonstrate all these approaches and you can find a video of the prototype running here:
Source code is available on Github.
Searching for time-based movement will give you better results.
Basically, you either have a timer loop or an event triggered on a regular clock, depending on your language. If it's a loop, you check the time and only react every 1/60th of a second or so.
Some sites
http://www.cppgameprogramming.com/
Ruby game programming
PyGame
Flight simulation is one of the more complex examples of real-time simulation. The fluid dynamics, control systems, and numerical methods involved can be overwhelming.
As an introduction to the subject, I recommend Build Your Own Flight Sim in C++. It is out of print but seems to be available used. The book is from 1996 and is horribly dated; it assumes a DOS environment. However, it provides a good overview of the topics and covers numerical integration, basic flight mechanics, and control systems. The code examples are simplistic, reasonably complete, and do not assume the more common toolsets used for graphics today. As with most things, I think it is easier to learn the subject from a more basic reference.
A more advanced text (college senior / first-year graduate level), Principles of Flight Simulation, provides excellent coverage of the breadth of topics involved in building a flight simulation. It would make an excellent reference for anyone seriously interested in flight simulation as an engineering task, or for more realistic game development.