Problems when using Chapel 1.19 along with GASNet PSM (OmniPath) substrate

After changing to version 1.19, while still using the Omni-Path implementation, I'm randomly receiving the following error: ERROR calling: gasnet_barrier_try(id, 0).
I know that the Omni-Path implementation of GASNet is no longer supported by the current version of Chapel. However, I would like to use some features available only in version 1.19, and the cluster I use runs over an Omni-Path network.
In order to use the PSM substrate (Omni-Path), I proceeded as suggested by Chapel's Gitter community:
export CHPL_GASNET_ALLOW_BAD_SUBSTRATE=true
wget https://gasnet.lbl.gov/download/GASNet-1.32.0.tar.gz
tar xzf GASNet-1.32.0.tar.gz
rm -rf $CHPL_HOME/third-party/gasnet/gasnet-src
mv GASNet-1.32.0 $CHPL_HOME/third-party/gasnet/gasnet-src
Then, I set up the other variables:
export CHPL_COMM='gasnet'
export CHPL_LAUNCHER='gasnetrun_psm'
export CHPL_COMM_SUBSTRATE='psm'
export CHPL_GASNET_SEGMENT='everything'
export CHPL_TARGET_CPU='native'
export GASNET_PSM_SPAWNER='ssh'
export HFI_NO_CPUAFFINITY=1
Next, I build the runtime, etc.
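Concretely, the rebuild is just the standard one from the top of the Chapel tree (a minimal sketch; your exact invocation may differ):
cd $CHPL_HOME
make   # rebuilds the compiler and runtime against the swapped-in GASNet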
However, when I run experiments, I randomly receive the following error:
ERROR calling: gasnet_barrier_try(id, 0)
at: comm-gasnet.c:1020
error: GASNET_ERR_BARRIER_MISMATCH (Barrier id's mismatched)
This error terminates the execution of the program.
I cannot find the reason for this error in the GASNet documentation; I could only find a bit of information in GASNet's source code.
Do you know the cause of this problem?
Thank you all.

I realize this is an old question, but for the record: the current version of Chapel (1.28.0) now embeds a version of GASNet (GASNet-EX 2022.3.0 as of this writing) whose ofi-conduit (CHPL_COMM=gasnet CHPL_COMM_SUBSTRATE=ofi) provides high-quality support for Intel Omni-Path.
In particular, there should no longer be any reason to clobber Chapel's embedded version of GASNet-EX with an ancient/outdated GASNet-1 to get Omni-Path support, as suggested in the original question.
For more details see Chapel's detailed Omni-Path instructions.
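For example, the configuration then reduces to something like this (a sketch using only the settings named above; see the linked instructions for launcher and libfabric-provider details):
export CHPL_COMM=gasnet
export CHPL_COMM_SUBSTRATE=ofi
cd $CHPL_HOME && make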

Related

How to add changes for NO ENCRYPT DB2 option to db2RestoreStruct

I am trying to restore an encrypted DB to a non-encrypted DB. I made changes by setting piDbEncOpts to SQL_ENCRYPT_DB_NO, but the restore still fails. Is there DB2 sample code where I can check how to set the "NO Encrypt" option in DB2? I am adding the code snippet below.
db2RestoreStruct->piDbEncOpts->encryptDb = SQL_ENCRYPT_DB_NO;
The 'C' API named db2Restore will restore an encrypted image to an unencrypted database, when used correctly.
You can use a modified version of IBM's samples files: dbrestore.sqc and related files, to see how to do it.
Depending on your 'C' compiler version and settings, you might get a lot of warnings from IBM's code, because IBM does not appear to maintain the code of their samples as the years pass. However, you do not need to run IBM's sample code; you can study it to understand how to fix your own C code.
If installed, the samples component must match your Db2-server version+fixpack, and you must use the C include files that come with that version+fixpack to get the relevant definitions.
The modifications to IBM's sample code include the following (a consolidated sketch appears after this list):
When using the db2Restore API, ensure its first argument has a value compatible with your server's Db2 version and fixpack; if you specify a version of Db2 that did not support this functionality, the API will fail. For example, on my Db2-LUW v11.1.4.6, I used the predefined db2Version1113, like this:
db2Restore(db2Version1113, &restoreStruct, &sqlca);
When setting the restore iOptions field, enable the flag DB2RESTORE_NOENCRYPT; for example, in IBM's sample, include the additional flag:
restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB | DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD | DB2RESTORE_NOENCRYPT;
Ensure the restoredDbAlias differs from the encrypted-backup alias name.
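Putting those modifications together, here is a minimal, untested sketch of the call sequence, modeled on IBM's dbrestore.sqc sample. The media-list setup and error handling are elided, and every field and constant name used here (piSourceDBAlias, piTargetDBAlias, iCallerAction, DB2RESTORE_RESTORE, ...) should be verified against the db2ApiDf.h that ships with your Db2 version+fixpack:
/* Sketch only: verify all names against your db2ApiDf.h */
#include <db2ApiDf.h>
#include <string.h>

static int restoreNoEncrypt(void)
{
    struct sqlca sqlca;
    db2RestoreStruct restoreStruct;

    memset(&sqlca, 0, sizeof sqlca);
    memset(&restoreStruct, 0, sizeof restoreStruct);

    restoreStruct.piSourceDBAlias = "ENCDB";   /* alias the encrypted image came from */
    restoreStruct.piTargetDBAlias = "PLAINDB"; /* must differ from the encrypted-backup alias */
    restoreStruct.iCallerAction   = DB2RESTORE_RESTORE; /* check this constant in your headers */
    /* piMediaList (location of the backup image) must also be filled in; see dbrestore.sqc */

    /* DB2RESTORE_NOENCRYPT is the key flag for restoring to an unencrypted DB */
    restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB |
                             DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD |
                             DB2RESTORE_NOENCRYPT;

    /* the first argument must match your server's Db2 version+fixpack */
    db2Restore(db2Version1113, &restoreStruct, &sqlca);

    return sqlca.sqlcode; /* 0 on success; otherwise inspect the sqlca */
}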
I tested with Db2 v11.1.4.6 (db2Version1113 in the API) with gcc 9.3.
I also tested with Db2 v11.5 (db2Version11500 in the API) with gcc 9.3.

IBM Aspera get size of file before download

I am using Aspera Connect on Mac to download files from a server. It works fine in the terminal, but I was wondering if, before downloading a file, I could read its size first and then decide whether or not to download it. I found the flag
'--precalculate-job-size'
but it only calculates the size right before the download, and there's no way to stop the download at that point.
The current command I use is this:
/Applications/Aspera\ Connect.app/Contents/Resources/./ascp -QT -l 200M -P33001 -i "/Applications/Aspera Connect.app/Contents/Resources/asperaweb_id_dsa.openssh" emp_ext3@fasp.ebi.ac.uk:/{asp_path} {local_path}
The resources for the flags are here:
https://download.asperasoft.com/download/docs/ascp/2.7/html/index.html
To answer your question, without going too much into the details:
If you want to display the size of an element on an Aspera server to which you have access, you can use the command-line tool "Amelia" (mlia), see:
https://www.rubydoc.info/gems/asperalm
mlia server --url=ssh://fasp.ebi.ac.uk:33001 --username=emp_ext3 --ssh-keys=~/.aspera/mlia/aspera_bypass_dsa.pem br /10002/data/100_movie_gc.mrcs
There are plenty of options, like: --format=csv --fields=size (a combined example follows below).
Note that this displays individual file sizes, but not recursive folder size.
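For example, combining the options above with the same server and path from the question (the option placement is a sketch; check mlia --help):
mlia server --url=ssh://fasp.ebi.ac.uk:33001 --username=emp_ext3 --ssh-keys=~/.aspera/mlia/aspera_bypass_dsa.pem br /10002/data/100_movie_gc.mrcs --format=csv --fields=size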
A few other things:
You are not exactly using "Connect", but rather the "ascp" command line. Connect refers to the browser extension and lightweight app, while ascp is the implementation of the Aspera FASP transfer protocol, found in basically all Aspera products.
The latest ascp documentation can be found here: https://www.ibm.com/support/knowledgecenter/SSL85S_3.9.6/hsts_admin_linux/dita/hsts_admin_linux_ascp_usage.html
Did you know you can also use the free client?
https://downloads.asperasoft.com/en/downloads/2
It includes ascp, but also a graphical user interface.

Go Stackdriver debugger error loading program

I am trying to set up Stackdriver debugging using Go. Using the article and this great Medium post, I came up with this solution.
Key parts, in cloudbuild.yaml:
- name: gcr.io/cloud-builders/wget
  args: [
    "-O",
    "go-cloud-debug",
    "https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug"
  ]
...
In the Dockerfile I have:
...
COPY gopath/bin/stackdriver-demo /stackdriver-demo
ADD go-cloud-debug /
ADD source-context.json /
CMD ["/go-cloud-debug","-sourcecontext=./source-context.json", "-appmodule=go-errrep","-appversion=1.0","--","/stackdriver-demo"]
...
However, the pods keep crashing; the container logs show this error:
Error loading program: decoding dwarf section info at offset 0x0: too short
EDIT: Using https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug may be outdated, as I haven't seen it used outside Daz's Medium post. The official docs use the package cloud.google.com/go/cmd/go-cloud-debug-agent
I have updated the cloudbuild.yaml file to install this package:
- name: 'gcr.io/cloud-builders/go'
  args: ["get", "-u", "cloud.google.com/go/cmd/go-cloud-debug-agent"]
  env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux']
- name: 'gcr.io/cloud-builders/go'
  args: ["install", "cloud.google.com/go/cmd/go-cloud-debug-agent"]
  env: ['PROJECT_ROOT=github.com/roberson34/stackdriver-demo', 'CGO_ENABLED=0', 'GOOS=linux']
In the Dockerfile I can then access the binary at gopath/bin/go-cloud-debug-agent.
When I execute go-cloud-debug-agent with my own program as an argument:
/go-cloud-debug-agent -sourcecontext=./source-context.json -appmodule=go-errrep -appversion=1.0 -- /stackdriver-demo
I get another opaque error:
Error loading program: AttrStmtList not present or not int64 for unit 88
So basically, the cloud-debug binary from https://storage.googleapis.com/cloud-debugger/compute-go/go-cloud-debug and the cloud-debug-agent binary from the package cloud.google.com/go/cmd/go-cloud-debug-agent both fail, with different errors.
Would appreciate any tips on what I'm doing wrong and how to fix it.
OK :-)
Yes, you should follow the current Stackdriver documentation, e.g. go-cloud-debug-agent
Unfortunately, there are now various issues with my post including a (currently broken) gcr.io/cloud-builders/kubectl for regions.
I think your issue pertains to your use of golang:alpine. Alpine uses musl rather than the glibc that you find on most other Linux distros, and so you really must compile for Alpine to ensure your binaries reference the correct libc.
I'm able to get your solution working primarily by switching your Dockerfile to pull the Cloud Debug Agent while on Alpine and to compile your source on Alpine:
FROM golang:alpine
# git is required for 'go get'
RUN apk add git
RUN go get -u cloud.google.com/go/cmd/go-cloud-debug-agent
ADD main.go src
# -N -l disables optimizations and inlining so the debug agent gets usable DWARF info
RUN CGO_ENABLED=0 go build -gcflags=all='-N -l' src/main.go
ADD source-context.json /
CMD ["bin/go-cloud-debug-agent","-sourcecontext=/source-context.json", "-appmodule=stackdriver-demo","-appversion=1.0","--","main"]
I think that should get you beyond the errors that you documented and you should be able to deploy your container to Kubernetes.
I've made my version of your image publicly available (and will retain it for a few days for you):
gcr.io/dazwilkin-190402-55473323/roberson34@sha256:17cb45f1320e2fe04e0681310506f4c229896429192b0d1c2c8dc20ed54adb0d
You may wish to reference it (by that digest) in your deployment.yaml, for example:
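A minimal sketch of such a Deployment (the name and labels are placeholders; only the image digest is from above):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stackdriver-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stackdriver-demo
  template:
    metadata:
      labels:
        app: stackdriver-demo
    spec:
      containers:
      - name: stackdriver-demo
        image: gcr.io/dazwilkin-190402-55473323/roberson34@sha256:17cb45f1320e2fe04e0681310506f4c229896429192b0d1c2c8dc20ed54adb0d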
NB For Error Reporting to be "interesting", your code needs to generate errors and, with your example, this is going to be challenging (usually a good thing). You may consider adding another errorful handler that always results in errors so that you may test the service.
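Something like this hypothetical handler (not part of your original code; the route and message are invented for illustration) would do:
package main

import (
	"errors"
	"log"
	"net/http"
)

func main() {
	// A handler that always fails, so Error Reporting has something to report.
	http.HandleFunc("/broken", func(w http.ResponseWriter, r *http.Request) {
		err := errors.New("synthetic failure for testing Error Reporting")
		log.Printf("handler error: %v", err)
		http.Error(w, err.Error(), http.StatusInternalServerError)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}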

Zend_Cache with Memcachier

Can I use MemCachier with Zend Framework 1.12?
The provider that I am using (AppFog) only offers MemCachier (Memcached has been "coming soon" for 10 months), and my app will need a lot of caching when it starts. I don't want to stick with APC, so I have no other good alternative.
So this is just a half answer right now; I'll try to figure out the rest. I work for MemCachier, by the way, so please shoot us an email at support@memcachier.com if you have more questions.
PHP includes two memcache bindings by default: memcache and memcached. The first one (memcache) is its own implementation of the memcache protocol, while the second one (memcached) is a PHP binding to the libmemcached C++ library.
The memcached binding for PHP does support SASL these days (since version 2.0.0). Sadly, it isn't documented. It is also an optional part of the memcached module, so you'll need to make sure it is compiled on your machine (or on AppFog) with SASL support enabled. The steps to do that are roughly:
Install libmemcached. I used version 1.0.14.
Install php-memcached. Make sure to pass the "--enable-memcached-sasl" option to it when running ./configure.
When building both of these, you can sanity-check the output of ./configure to make sure that SASL support is indeed enabled; sadly, right now that can be tricky.
Edit your php.ini file. Put the following lines into it:
[memcached]
memcached.use_sasl = 1
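You can then check that PHP picked the setting up (a quick sanity check):
$ php -i | grep memcached.use_sasl
# should show a value of 1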
I did all of this on OS X 10.8 using Homebrew. If that is the case for you, the following should work:
$ brew install libmemcached
$ brew edit php54-memcached
// you'll need to add the line:
args << "--enable-memcached-sasl"
// to the brew file.
$ brew install php54-memcached
Now to actually use SASL support, here is a test file that demonstrates it and is a good sanity check.
<?php
/**
* Test of the PHP Memcached extension.
*/
error_reporting(E_ALL & ~E_NOTICE);
$use = ini_get("memcached.use_sasl");
$have = Memcached::HAVE_SASL;
echo "Have SASL? $have\n";
echo "Using SASL? $use\n\n";
$mc = new Memcached();
$mc->setOption(Memcached::OPT_BINARY_PROTOCOL, true);
$mc->setSaslAuthData("user-1", "pass");
$mc->addServer("localhost", 11211);
$mc->set("foo", "Hello!");
$mc->set("bar", "Memcached...");
$arr = array(
$mc->get("foo"),
$mc->get("bar")
);
var_dump($arr);
?>
Adapting this to work with Zend Framework is unknown to me right now. I'm not familiar with it, so it may take me some time to install and figure out. It seems very doable, though, given that one of the backends works with SASL auth.
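That said, here is a rough, untested sketch of one way it might be wired up in ZF 1.12 on top of the Libmemcached backend (which wraps php-memcached). The subclass, the _memcache property name, and the username/password option handling are assumptions to check against your ZF source:
<?php
// Rough sketch, untested: adds MemCachier SASL credentials on top of
// Zend_Cache_Backend_Libmemcached (which wraps the php memcached extension).
class My_Cache_Backend_Memcachier extends Zend_Cache_Backend_Libmemcached
{
    public function __construct(array $options = array())
    {
        // pull the custom credentials out before the parent validates the options
        $username = $options['username']; unset($options['username']);
        $password = $options['password']; unset($options['password']);
        parent::__construct($options);
        // _memcache holds the underlying Memcached instance (verify in your ZF source)
        $this->_memcache->setOption(Memcached::OPT_BINARY_PROTOCOL, true);
        $this->_memcache->setSaslAuthData($username, $password);
    }
}

$backend = new My_Cache_Backend_Memcachier(array(
    'servers'  => array(array('host' => 'localhost', 'port' => 11211)),
    'username' => 'user-1',
    'password' => 'pass',
));
$cache = Zend_Cache::factory('Core', $backend, array('automatic_serialization' => true));
$cache->save('Hello!', 'foo');
var_dump($cache->load('foo'));
?>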

Issue with Informix (ifx_connect)

Hi, I have a problem after installing the Informix Client SDK (Ref: http://www.debian-administration.org/article/651/Connect_to_Informix_using_PHP5_on_Lenny_x86_64).
OS: CentOS
Here is the .php file that I use to connect:
$db_conn = ifx_connect("dbname@IPHost","user","pass");
There is an error here:
Warning: ifx_connect() [function.ifx-connect]: E [SQLSTATE=IX001 SQLCODE=-1829] in /var/www/html/index.php on line 5
Does anyone know the solution?
Thanks.
The way to find out more about errors from Informix is often:
$ finderr -1829
-1829 Cannot open file citoxmsg.pam.
The file citoxmsg.pam is missing from the directory $INFORMIXDIR/msg.
If this error occurs, note all circumstances and contact IBM Technical Support.
$
(Give or take some blank lines.) The finderr command is found in $INFORMIXDIR/bin. You need $INFORMIXDIR set in the environment unless /usr/informix is correct - it could be a symlink to the actual software directory.
There are two possibilities:
INFORMIXDIR is not set in the environment when PHP is run; or the php.ini file does not define a value for it, or the value is set incorrectly; or a default (quite possibly /usr/informix) is being used but the software is not installed there.
The installation is not complete - the relevant message file is missing as noted.
Of the two, I think reason 1 is much the more likely.
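To check reason 1 quickly, you can make the PHP process show what it sees before connecting (a sketch; /opt/informix is just an example path):
<?php
// Sanity check: does the PHP process actually see INFORMIXDIR?
var_dump(getenv("INFORMIXDIR"));

// If not, set it here (better: export it in the web server's startup environment)
putenv("INFORMIXDIR=/opt/informix"); // adjust to your actual install directory

$db_conn = ifx_connect("dbname@IPHost", "user", "pass");
if ($db_conn === false) {
    // ifx_error()/ifx_errormsg() return the code and text of the last Informix error
    echo ifx_error(), ": ", ifx_errormsg(), "\n";
}
?>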
The IX001 value for SQLSTATE is of minimal use - it is the generic 'something went wrong with Informix' message. The SQLCODE is much more significant and helpful.