Distribute a Simulink Desktop Real-Time model

Recently I tried to develop a simple Simulink model which receives a UDP packet, makes some calculations, and returns an answer via another UDP port. The model works just fine, and I was able to compile it to an EXE with no problem.
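(A tiny UDP client is enough to exercise such a model from another machine. Below is a minimal Python sketch of that idea; the IP address, the ports, and the single-double payload are made-up placeholders, not taken from my actual model.)
# Hypothetical UDP test client for a model that echoes a calculation.
# Address, ports and payload layout are placeholders - adjust to the model.
import socket
import struct

SEND_ADDR = ("192.168.0.10", 25000)  # model's UDP receive port (assumed)
RECV_PORT = 25001                    # model's UDP send port (assumed)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", RECV_PORT))
sock.settimeout(2.0)

sock.sendto(struct.pack("<d", 42.0), SEND_ADDR)  # send one little-endian double
try:
    data, _ = sock.recvfrom(1024)
    print("reply:", struct.unpack("<d", data)[0])
except socket.timeout:
    print("no reply within 2 s")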
My goal was for the model to work in real time, meaning 1 second in simulation equals 1 second on the PC. After some research I discovered this block:
Real Time Sync
which does the trick: now my simulation works exactly as I want. Next I tried to build the project. After making all the changes in the settings according to the documentation (mainly changing the target to sldrt.tlc), at the end of the compile process I got this:
### Created Simulink Desktop Real-Time module udpTest.rxw64
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/clang/win64/llvm-link-bca \
-Bstatic \
-o udpTest.bc \
udpTest.obj rtGetInf.obj rtGetNaN.obj rt_nonfinite.obj udpTest_data.obj udpTest_tgtconn.obj sldrt_main.obj rt_sim.obj ext_svr.obj updown_sldrt.obj \
\
\
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/lib/win64/imports.obj \
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/lib/win64/sldrtlib.lib
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/clang/win64/llc -mtriple=x86_64-pc-win32 -O3 -O3 -filetype=obj -o ../udpTest.rxw64 udpTest.bc
Build process completed successfully
As far as I understand, I can load that rxw64 file in Simulink in external mode and control it; all of that is OK, and I've done it. But is it possible to distribute it to a dedicated PC?
PS: Sorry for the long description, but I feel really confused and I want to give all the details.

Case closed. The answer is that I can't distribute my model as a separate application. I must set up a target PC dedicated to running the binary equivalent of my model. Now I'm moving on to searching for a suitable DOS-like boot setup, and maybe trying some kind of virtual PC.

Related

How can I see who is hogging all of the resources on Sun Grid Engine?

At my job we use Sun Grid Engine: qstat, qsub, etc.
Is there a way to see the percentage of resources currently used by each user? I know there is qhost -u "*", but this is a bit more difficult to interpret because it doesn't show how many resources are being used with respect to what is available.
If this is out of scope for SO then I will remove it.
Are there any built-in tools that do this, or public scripts on GitHub that can achieve this functionality?
The command qstat -u "*" -nenv -j "*" outputs job details, including a line with the job's usage:
usage 1: wallclock=44:12:05:42, cpu=1:10:40:01, mem=9284973.79642 GBs, io=631.16018 GB, iow=65.130 s, ioops=22213570, vmem=284.719M, maxvmem=65.121G, rss=14.435M, ..., maxrss=61.611G, maxpss=68.641G
I am not aware of a public script that would parse it and cross-reference the output of qhost to retrieve host resources.
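As a rough starting point, here is a Python sketch that just tallies jobs per user from the plain qstat listing (the column positions assume the classic SGE layout; summing the actual usage figures would mean parsing the qstat -j output above instead):
# Rough sketch: count jobs per user from `qstat -u "*"`.
# Assumes the classic SGE columns (job-ID, prior, name, user, state, ...).
import subprocess
from collections import Counter

out = subprocess.run(["qstat", "-u", "*"], capture_output=True, text=True).stdout

jobs_per_user = Counter()
for line in out.splitlines()[2:]:      # skip the two header lines
    fields = line.split()
    if len(fields) >= 5:
        jobs_per_user[fields[3]] += 1  # 4th column is the user name

for user, n in jobs_per_user.most_common():
    print(f"{user:<15} {n} jobs")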
I think I should be working on this over the weekend. :)

Can a private Cardano network be created?

In Ethereum we can use geth to create a private network, for example by defining a genesis block with puppeth and then creating nodes.
Is there an equivalent of geth in Cardano and can we create private networks?
I don't know much about Ethereum, but to set up a private network for Cardano you need cardano-sl. Set it up on your local machine or a VPS according to these instructions: https://github.com/input-output-hk/cardano-sl/blob/develop/docs/how-to/build-cardano-sl-and-daedalus-from-source-code.md . After downloading and building the binaries (in either nix or stack mode), you need to connect your node to mainnet or testnet as per your requirement; follow this link for that: https://github.com/input-output-hk/cardano-sl/blob/develop/docs/how-to/connect-to-cluster.md .
Now your node should start downloading blocks, and it will take some time to complete the sync. You can check the synchronization progress with a simple curl command: curl -X GET https://localhost:8090/api/v1/node-info. You also need to provide certs with the request, or you can make an insecure call by passing the -k option; see the API reference for complete info: https://cardanodocs.com/technical/wallet/api/v1/#
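The same check from Python might look like this (a sketch; verify=False mirrors curl's -k, so pass your certs instead in real use):
# Sketch: poll the node-info endpoint shown above.
import requests

resp = requests.get("https://localhost:8090/api/v1/node-info", verify=False)  # -k equivalent
resp.raise_for_status()
print(resp.json())  # inspect the JSON for the sync progress fields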
Once your node is in sync, you can call the APIs to create your wallet and accounts and do ADA transactions.
I skipped some steps, but I hope this will still help many people get going.

Problems when using Chapel 1.19 along with GASNet PSM (OmniPath) substrate

After changing to version 1.19, while still using the Omni-Path implementation, I randomly receive the following error: ERROR calling: gasnet_barrier_try(id, 0).
I know that the Omni-Path implementation of GASNet is no longer supported by the current version of Chapel. However, I would like to use some features available only in version 1.19, and the cluster I use runs over an Omni-Path network.
In order to use the PSM substrate (OmniPath), I proceeded as suggested by Chapel's Gitter community:
export CHPL_GASNET_ALLOW_BAD_SUBSTRATE=true
wget https://gasnet.lbl.gov/download/GASNet-1.32.0.tar.gz
tar xzf GASNet-1.32.0.tar.gz
rm -rf $CHPL_HOME/third-party/gasnet/gasnet-src
mv GASNet-1.32.0 $CHPL_HOME/third-party/gasnet/gasnet-src
Then I set up the other variables:
export CHPL_COMM='gasnet'
export CHPL_LAUNCHER='gasnetrun_psm'
export CHPL_COMM_SUBSTRATE='psm'
export CHPL_GASNET_SEGMENT='everything'
export CHPL_TARGET_CPU='native'
export GASNET_PSM_SPAWNER='ssh'
export HFI_NO_CPUAFFINITY=1
Next, I build the runtime, etc.
However, when I run experiments, I randomly receive the following error:
ERROR calling: gasnet_barrier_try(id, 0)
at: comm-gasnet.c:1020
error: GASNET_ERR_BARRIER_MISMATCH (Barrier id's mismatched)
This terminates the execution of the program.
I cannot find the reason for this error in the GASNet documentation. I could only find a bit of information in GASNet's code.
Do you know what the cause of this problem is?
Thank you all.
I realize this is an old question, but for the record, the current version of Chapel (1.28.0) now embeds a version of GASNet (GASNet-EX 2022.3.0 as of this writing) whose CHPL_COMM=gasnet CHPL_COMM_SUBSTRATE=ofi configuration (aka GASNet ofi-conduit) provides high-quality support for Intel Omni-Path.
In particular, there should no longer be any reason to clobber Chapel's embedded version of GASNet-EX with an ancient/outdated GASNet-1 to get Omni-Path support, as suggested in the original question.
For more details see Chapel's detailed Omni-Path instructions.

Robot Framework - workflow design

At the moment I'm using Taskflow to specify my test workflow. I'm trying to understand whether Robot Framework can be used for my test scenario.
For example, my typical test is:
- Start traffic on device1
- While traffic is flowing:
  - Collect real-time traffic data on device2 via SSH
  - Collect real-time traffic data on device3 via SSH
- Stop traffic on device1
- Get output data from device2 and device3
- Check outputs
I did not find any workflow details for Robot Framework. Is it possible to design such a test in RF?
Riccardo
I believe it can be used for your scenario.
Robot Framework uses external libraries such as SSHLibrary.
Here is the documentation for said library, with a description of the concepts, the keywords you can use, and examples.
A lot of things are generally possible with Robot Framework, as you can always expand its capabilities by writing your own external libraries if the commonly used ones do not match your needs (see the sketch after the list below).
But it seems that this library might do exactly what you need:
- You can open several connections
- You can start/execute commands
- You can log to a file or read output
- ...
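To give an idea of the custom-library route: a library is just a Python class whose public methods become keywords. A minimal sketch of your workflow as a library (the device/traffic logic is a made-up stub, not a real API):
# TrafficLibrary.py - hypothetical custom Robot Framework library.
# Robot Framework turns public methods into keywords, so in a suite
# `Start Traffic    device1` calls start_traffic("device1").
class TrafficLibrary:
    ROBOT_LIBRARY_SCOPE = "SUITE"  # share one instance across the suite

    def __init__(self):
        self.outputs = {}

    def start_traffic(self, device):
        print(f"starting traffic on {device}")  # stub

    def collect_traffic_data(self, device):
        self.outputs[device] = f"data from {device}"  # stub

    def stop_traffic(self, device):
        print(f"stopping traffic on {device}")  # stub

    def check_outputs(self, *devices):
        for device in devices:
            if device not in self.outputs:
                raise AssertionError(f"no output collected from {device}")
In the test suite you would then import it with Library TrafficLibrary and chain the keywords in the order of the workflow above.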

How do you access a MongoDB database from two OpenShift apps?

I want to be able to access my MongoDB database from two OpenShift apps: one app is an interactive database maintenance app used via the browser, the other is the principal web application which runs on mobile devices via an OpenShift app. As I see it in OpenShift, MongoDB gets set up within a particular app's folder space, not independently of that space.
What would be the method to accomplish this multiple-app access to the database?
It's not ideal, but is my only choice to merge the functionality of both OpenShift apps into one? That tastes like a bad plate of spaghetti.
2018 update: this applies to OpenShift 2. Version 3 is very different; while the general rules of Linux and scaling still apply, the details are obsolete.
Although #MartinB's answer was timely and correct, it's just a link, so let me put the essentials here.
Assuming that setting up a non-shared DB is already done, you need to find its host and port. You can ssh to your app (the one with the DB) or use rhc:
rhc ssh -a appwithdb
env | grep MONGODB
env lists all the environment variables, and grep filters them to show only the Mongo-related ones. You should see something like:
OPENSHIFT_MONGODB_DB_HOST=xxxxx-yyyyy.apps.osecloud.com
OPENSHIFT_MONGODB_DB_PORT=zzzzz
xxxxx is the ID of the gear that Mongo sits on
yyyyy is your domain/namespace
zzzzz is the MongoDB port
Now you can use these to create a connection to the DB from anywhere in your OpenShift environment. Another application has to use the xxxxx-yyyyy:zzzzz URL. You can store these values in custom variables to make maintenance easier:
$ rhc env-set \
MYOWN_DB_HOST=xxxxx-yyyyy \
MYOWN_DB_PORT=zzzzz \
MYOWN_DB_PASSWORD=****** \
MYOWN_DB_USERNAME=admin..... \
MYOWN_DB_NAME=dbname...
Then use these environment variables instead of the standard ones. Just remember that they don't get updated automatically if the DB moves.
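For instance, a Python app on the other gear could connect through those custom variables (a sketch; assumes pymongo is installed and the MYOWN_DB_* names were set as above):
# Sketch: second app connecting via the custom variables from `rhc env-set`.
import os
from pymongo import MongoClient

client = MongoClient(
    host=os.environ["MYOWN_DB_HOST"],
    port=int(os.environ["MYOWN_DB_PORT"]),
    username=os.environ["MYOWN_DB_USERNAME"],
    password=os.environ["MYOWN_DB_PASSWORD"],
    # depending on your setup you may also need authSource=...
)
db = client[os.environ["MYOWN_DB_NAME"]]
print(db.list_collection_names())  # quick connectivity check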
Please read the following article from the OpenShift blog: https://blog.openshift.com/sharing-database-across-applications/