How can I see who is hogging all of the resources on Sun Grid Engine?

At my job we use Sun Grid Engine (qstat, qsub, etc.).
Is there a way to see the percentage of resources currently used by each user? I know there is qhost -u "*", but that is a bit harder to interpret because it doesn't show how many resources are being used relative to what is available.
If this is out of scope for SO then I will remove it.
Are there any built-in tools that do this, or public scripts on GitHub that can achieve this functionality?

The command qstat -u "*" -nenv -j "*" outputs job details, including a line with the job's usage:
usage 1: wallclock=44:12:05:42, cpu=1:10:40:01, mem=9284973.79642 GBs, io=631.16018 GB, iow=65.130 s, ioops=22213570, vmem=284.719M, maxvmem=65.121G, rss=14.435M, ..., maxrss=61.611G, maxpss=68.641G
I am not aware of a public script that would parse it and cross-reference the output of qhost to retrieve the hosts' resources.
I think I should be working on this over the weekend. :)
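Until then, here is a rough shell sketch of the idea: sum the slots occupied by each user's running jobs via qstat and compare that against the total slot count reported by qhost. The awk column positions and header line counts are assumptions that may need adjusting for your Grid Engine version.
# Sketch: per-user slot usage as a percentage of all slots
# (column positions are assumptions and may differ between versions)
total_slots=$(qhost | awk 'NR > 3 { total += $3 } END { print total }')   # column 3 = NCPU (assumed)
qstat -u "*" -s r | awk -v total="$total_slots" '
    NR > 2 { used[$4] += $9 }                 # column 4 = user, column 9 = slots (assumed)
    END {
        if (total + 0 == 0) total = 1         # avoid divide-by-zero if qhost parsing failed
        for (u in used)
            printf "%-15s %6d slots  %5.1f%%\n", u, used[u], 100 * used[u] / total
    }' | sort -k2 -rn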

Distribute a Simulink Desktop Real-Time model

Recently I tried to develop a simple Simulink model which receives a UDP packet, makes some calculations, and returns an answer via another UDP port. The model works just fine, and I was able to compile it to an EXE with no problem.
My goal was for the model to run in real time, meaning 1 second of simulation equals 1 second on the PC. After some research I discovered this block:
Real Time Sync
which does the trick; now my simulation works exactly as I want. Next, when I tried to build the project (after making all the changes in the settings according to the documentation, mainly changing the target to sldrt.tlc), at the end of the compile process I got this:
### Created Simulink Desktop Real-Time module udpTest.rxw64
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/clang/win64/llvm-link-bca \
-Bstatic \
-o udpTest.bc \
udpTest.obj rtGetInf.obj rtGetNaN.obj rt_nonfinite.obj udpTest_data.obj udpTest_tgtconn.obj sldrt_main.obj rt_sim.obj ext_svr.obj updown_sldrt.obj \
\
\
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/lib/win64/imports.obj \
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/lib/win64/sldrtlib.lib
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/clang/win64/llc -mtriple=x86_64-pc-win32 -O3 -O3 -filetype=obj -o ../udpTest.rxw64 udpTest.bc
Build process completed successfully
As far as I understand, I can load that rxw64 file in Simulink in external mode and control it; all of that works, and I've done it. But is it possible to distribute it to a dedicated PC?
PS: Sorry for the long description, but I feel really confused and I want to give all the details.
Case closed. The answer is that I can't distribute my model as a separate application. I must set up a target PC dedicated to running the binary equivalent of my model. Now I'm going to look for a suitable DOS-like boot setup, and maybe try some kind of virtual PC.

REST API monitoring (Progress OpenEdge)

Is there any monitoring tool for a REST application (Progress OpenEdge) that can check:
whether the service is up and running or not
whether the AppServer is up and running
hit count and other information for each API resource
error logging
customized reporting (such as sending a report by mail)
I saw the RESTMAN utility in the documentation but couldn't find the details I expected/needed. Can it do the things mentioned above? If yes, how do I implement and customize it?
(Progress version: 11.3)
Mahesh
I had a quick look in the current online OpenEdge documentation (which is for 11.7) and found this - https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/asadm/using-the-restman-utility.html
There are links to the functions that seem to show what you are looking for.
I haven't used it myself, but it looks like most of the Progress monitoring tools, so the issue you may have is that it provides "all" of the information that you need, but in a format that you have to parse before you get to the specific details you need.
Hope that helps?
As examples where I know that restmgr1 is the server:
Show if Restman Appserver is running
restman -i restmgr1 -q
Show all the deployed REST applications
restman -i restmgr1 -list
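If you need more than these one-off queries, a small cron-driven shell script can cover the availability checks and the mailed report asked about in the question. This is only a rough sketch, not a Progress-supplied tool: the instance name restmgr1 comes from the examples above, while the resource URL, log path, and mail recipient are placeholders to replace.
#!/bin/sh
# Rough monitoring sketch; restmgr1 is taken from the examples above,
# the URL, log file and mail recipient are placeholders.
LOG=/tmp/rest-monitor.log
{
    date
    # Is the REST application responding?
    restman -i restmgr1 -q || echo "WARNING: restmgr1 did not respond"
    # Which REST applications are deployed?
    restman -i restmgr1 -list
    # Hit one resource and record status code and response time (URL is a placeholder)
    curl -s -o /dev/null -w "HTTP %{http_code} in %{time_total}s\n" \
        "http://localhost:8980/rest/MyApp/resource"
} >>"$LOG" 2>&1
# Customized reporting: mail the accumulated log (recipient is a placeholder)
mail -s "REST monitoring report" you@example.com <"$LOG"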

How do you access a MongoDB database from two Openshift apps?

I want to be able to access my MongoDB database from two OpenShift apps: one is an interactive database maintenance app used via the browser, the other is the principal web application which runs on mobile devices via an OpenShift app. As I see it, in OpenShift MongoDB gets set up within a particular app's folder space, not independent of that space.
What would be the method to give multiple apps access to the database?
It's not ideal, but is my only choice to merge the functionality of both OpenShift apps into one? That tastes like a bad plate of spaghetti.
2018 update: this applies to OpenShift 2. Version 3 is very different; while the general rules of Linux and scaling still apply, the details are obsolete.
Although @MartinB's answer was timely and correct, it's just a link, so let me put the essentials here.
Assuming that setting up a non-shared DB is already done, you need to find its host and port. You can ssh to your app (the one with the DB) or use rhc:
rhc ssh -a appwithdb
env | grep MONGODB
env prints all the environment variables, and grep filters them to show only the Mongo-related ones. You should see something like:
OPENSHIFT_MONGODB_DB_HOST=xxxxx-yyyyy.apps.osecloud.com
OPENSHIFT_MONGODB_DB_PORT=zzzzz
xxxxx is the ID of the gear that Mongo sits on
yyyyy is your domain/namespace
zzzzz is MongoDB port
Now, you can use these to create a connection to the DB from anywhere in your Openshift environment. Another application has to use the xxxxx-yyyyy:zzzzz URL. You can store them in custom variables to make maintenance easier.
$ rhc env-set \
MYOWN_DB_HOST=xxxxx-yyyyy \
MYOWN_DB_PORT=zzzzz \
MYOWN_DB_PASSWORD=****** \
MYOWN_DB_USERNAME=admin..... \
MYOWN_DB_NAME=dbname...
And then use the environment variables instead of the standard ones. Just remember they don't get updated automatically when the DB moves away.
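For example, the second application could assemble its connection string from those custom variables at startup. A minimal sketch, assuming the MYOWN_* names set above and that the mongo client is available on the gear:
# Build the connection URL from the custom variables defined above
MONGO_URL="mongodb://${MYOWN_DB_USERNAME}:${MYOWN_DB_PASSWORD}@${MYOWN_DB_HOST}:${MYOWN_DB_PORT}/${MYOWN_DB_NAME}"
# Quick connectivity check (assumes the mongo shell is installed on this gear)
mongo "$MONGO_URL" --eval 'db.runCommand({ ping: 1 })'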
Please read the following article from the OpenShift blog: https://blog.openshift.com/sharing-database-across-applications/

gsutil make bucket command [gsutil mb] is not working

I am trying to create a bucket using gsutil mb command:
gsutil mb -c DRA -l US-CENTRAL1 gs://some-bucket-to-my-gs
But I am getting this error message:
Creating gs://some-bucket-to-my-gs/...
BadRequestException: 400 Invalid argument.
I am following the documentation from here
What is the reason for this type of error?
I got the same error. It was because I used the wrong location.
The location parameter expects a region, not a specific zone.
E.g.
gsutil mb -p ${TF_ADMIN} -l europe-west1-b gs://${TF_ADMIN}
should have been
gsutil mb -p ${TF_ADMIN} -l europe-west1 gs://${TF_ADMIN}
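If you tend to copy location strings from Compute Engine settings, one defensive trick is to strip a trailing zone letter before passing the value to -l. A small sketch reusing the variables above (the sed pattern is a simplification, not a full validator):
LOCATION="europe-west1-b"
# Drop a trailing "-a".."-z" zone suffix so only the region remains
REGION=$(printf '%s' "$LOCATION" | sed -E 's/-[a-z]$//')
gsutil mb -p "${TF_ADMIN}" -l "$REGION" "gs://${TF_ADMIN}"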
One reason this error can occur (confirmed in chat with the question author) is that you have an invalid default_project_id configured in your .boto file. Ensure that ID matches your project ID in the Google Developers Console.
If you can make a bucket successfully using the Google Developers Console, but not using "gsutil mb", this is a good thing to check.
I was receiving the same error for the same command while using gsutil as well as the web console. Interestingly enough, changing my bucket name from "google-gatk-test" to "gatk" allowed the request to go through. The original name does not appear to violate bucket naming conventions.
Playing with the bucket name is worth trying if anyone else is running into this issue.
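One way to play with the name systematically is to pre-check it against the basic naming rules before calling gsutil. The regex below is a simplified approximation of the documented rules (3-63 characters; lowercase letters, digits, dashes, underscores and dots; starting and ending with a letter or digit), not a complete validator; note that the documented rules also forbid names containing "google", which may be what bit the poster above.
NAME="some-bucket-to-my-gs"
# Simplified pre-flight check of the bucket name (not a complete validator)
if printf '%s' "$NAME" | grep -Eq '^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$'; then
    gsutil mb -l US-CENTRAL1 "gs://$NAME"
else
    echo "Bucket name '$NAME' breaks the basic naming rules" >&2
fi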
I got this error and adding the default_project_id to the .boto file didn't work.
It took me some time, but in the end I deleted the credentials file from the "Global Config" directory and recreated the account.
I'm using it on Windows, by the way...
This can happen if you are logged into the management console (storage browser), possibly a locking/contention issue.
May be an issue if you add and remove buckets in batch scripts.
In particular, this was happening to me when creating regionally diverse (non-DRA) buckets:
gsutil mb -l EU gs://somebucket
Also watch out for underscores; the abstraction scheme seems to use them to map folders. All objects in the same bucket are stored at the same level (possibly as blobs in an abstracted database structure).
You can see this when downloading from the browser interface (at the moment anyway).
An object copied to gs://somebucket/home/crap.txt might be downloaded via a browser (or curl) as home_crap.txt. As an aside (red herring), somefile.tar.gz can come down as somefile.tar.gz.tar, so a little renaming may be required due to the vagaries of the headers returned from the browser interface. The minimum real support level is still $150/month.
I had this same issue when I created my bucket using the following commands:
MY_BUCKET_NAME_1=quiceicklabs928322j22df
MY_BUCKET_NAME_2=MY_BUCKET_NAME_1
MY_REGION=us-central1
But when I added a dollar sign ($) to reference the variable, as MY_BUCKET_NAME_2=$MY_BUCKET_NAME_1, the error was cleared and I was able to create the bucket.
I got this error when I had a capital letter in the bucket name:
$gsutil mb gs://CLIbucket-anu-100000
Creating gs://CLIbucket-anu-100000/...
BadRequestException: 400 Invalid bucket name: 'CLIbucket-anu-100000'
$gsutil mb -l ASIA-SOUTH1 -p single-archive-352211 gs://clibucket-anu-100
Creating gs://clibucket-anu-100/..
$

What's the best Perl module for hierarchical and inheritable configuration?

If I have a greenfield project, what is the best practice Perl based configuration module to use?
There will be a Catalyst app and some command line scripts. They should share the same configuration.
Some features I think I want ...
Hierarchical Configurations to cleanly maintain different development and live settings.
I'd like to define "global" configurations once (eg, results_per_page => 20), have those inherited but override-able by my dev/live configs.
Global:
results_per_page: 20
db_dsn: DBI:mysql;
db_name: my_app
Dev:
inherit_from: Global
db_user: dev
db_pass: dev
Dev_New_Feature_Branch:
inherit_from: Dev
db_name: my_app_new_feature
Live:
inherit_from: Global
db_user: live
db_pass: secure
When I deploy a project to a new server, or branch/fork/copy it somewhere new (eg, a new development instance), I want to (one time only) set which configuration set/file to use, and then all future updates are automatic.
I'd envisage this could be achieved with a symlink:
git clone example.com:/var/git/my_project . # or any equiv vcs
cd my_project/etc
ln -s live.config to_use.config
Then in the future
git pull # or any equiv vcs
I'd also like something akin to FindBin, so that my configs can use either absolute paths or paths relative to the current deployment. Given
/home/me/development/project/
bin
lib
etc/config
where /home/me/development/project/etc/config contains:
tmpl_dir: templates/
when my perl code looks up the tmpl_dir configuration it'll get:
/home/me/development/project/templates/
But on the live deployment:
/var/www/project/
bin
lib
etc/config
The same code would magically return
/var/www/project/templates/
Absolute values in the config should be honoured, so that:
apache_config: /etc/apache2/httpd.conf
would return "/etc/apache2/httpd.conf" in all cases.
Rather than a FindBin style approach, an alternative might be to allow configuration values to be defined in terms of other configuration values?
tmpl_dir: $base_dir/templates
I'd also like a pony ;)
Catalyst::Plugin::ConfigLoader supports multiple overriding config files. If your Catalyst app is called MyApp, then it has three levels of override: 1) MyApp.pm can have a __PACKAGE__->config(...) directive, 2) it next looks for MyApp.yml in the main directory of the app, 3) it looks for MyApp_local.yml. Each later level may override settings from the earlier ones.
In a Catalyst app I built, I put all of my immutable settings in MyApp.pm, my debug settings in MyApp.yml, and my production settings in MyApp_<servertype>.yml and then symlinked MyApp_local.yml to point at MyApp_<servertype>.yml on each deployed server (they were all a little different...).
That way, all of my config was in SVN and I just needed one ln -s step to manually configure a server.
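To make that symlink step concrete, it could look roughly like this on a deployed server; MyApp_production.yml stands in for the MyApp_<servertype>.yml naming described above and is only an example name:
# One-time step per server: point the highest-priority override file at
# the server-specific config (MyApp_production.yml is an example name)
cd /var/www/MyApp
ln -s MyApp_production.yml MyApp_local.yml
# ConfigLoader then layers __PACKAGE__->config(...) -> MyApp.yml -> MyApp_local.yml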
Perl Best Practices warns against exactly what you want. It states that config files should be simple and avoid the sort of baroque features you desire. It goes on to recommend three modules (none of which are Core Perl): Config::General, Config::Std, and Config::Tiny.
The general rationale behind this is that config files tend to be edited by non-programmers, and the more complicated you make your config files, the more likely they are to screw them up.
All of that said, you might take a look at YAML. It provides a full-featured, human-readable* serialization format. I believe the currently recommended parser in Perl is YAML::XS. If you do go this route, I would suggest writing a configuration tool for end users to use instead of having them edit the files directly.
ETA: Based on Chris Dolan's answer it sounds like YAML is the way to go for you since Catalyst is already using it (.yml is the de facto extension for YAML files).
* I have heard complaints that blind people may have difficulty with it
YAML is hateful for config: it's not non-programmer friendly, partly because YAML in POD is by definition broken, as they're both whitespace-dependent in different ways. This addresses the main problem with Config::General. I've written some quite complicated config files with C::G in the past, and it really keeps out of your way in terms of syntax requirements, etc. Other than that, Chris' advice seems on the money.