I was able to install Informix on CentOS 7 without much trouble. Now that everything is set up, I'm attempting to follow a tutorial to create a dbspace. The first step is checking whether the server is up and ready using the oninit -v command, but this fails with the error:
bad INFORMIXSERVER
yeah, very descriptive...
Can someone help me troubleshoot this? There is very little information about Informix on the Internet, so I don't know where to begin.
Informix version: 12.10
CentOS version: 7
Environment variables:
-bash-4.2$ echo $INFORMIXDIR
/opt/informix
-bash-4.2$ echo $INFORMIXSERVER
miServidor
-bash-4.2$
Regards!
If you want to check if the server is up and running, run "onstat -":
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ echo $INFORMIXSERVER
irk1210
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ onstat -
IBM Informix Dynamic Server Version 12.10.FC10 -- On-Line -- Up 18 days 02:39:28 -- 219948 Kbytes
informix@irk:/data/informix/IBM/12.10.FC10/tmp$
"oninit -v" will attempt to start the server.
"oninit -V" (capital V) will show the version of the oninit binary.
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ oninit -V
IBM Informix Dynamic Server Version 12.10.FC10 Software Serial Number AAA#B000000
Mon Oct 23 12:55:56 CDT 2017
informix@irk:/data/informix/IBM/12.10.FC10/tmp$
Check that the INFORMIXSERVER environment variable is set. If it is not, you will get the following errors from 'onstat' and 'oninit':
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ unset INFORMIXSERVER
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ oninit -v
bad INFORMIXSERVERinformix@irk:/data/informix/IBM/12.10.FC10/tmp$
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ onstat -
shared memory not initialized for INFORMIXSERVER '<NULL>'
informix@irk:/data/informix/IBM/12.10.FC10/tmp$
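One extra sanity check, which is my suggestion and not part of the original answer: the value of INFORMIXSERVER has to match DBSERVERNAME (or one of the DBSERVERALIASES) in the onconfig file and have an entry in sqlhosts. Assuming the default file locations under $INFORMIXDIR, something like this would confirm it:

export INFORMIXDIR=/opt/informix
export INFORMIXSERVER=miServidor
export PATH=$INFORMIXDIR/bin:$PATH

# The server name must match DBSERVERNAME or one of the DBSERVERALIASES in the onconfig file...
grep -E '^DBSERVER(NAME|ALIASES)' $INFORMIXDIR/etc/onconfig*

# ...and must have a matching entry in sqlhosts
grep miServidor $INFORMIXDIR/etc/sqlhosts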
I have a problem when running this command: sudo -su user_test ./pgsql/bin/initdb -D /example/folder
I have researched many sources on the internet but haven't found a solution.
I hope someone can help me. Thanks.
Environment:
initdb (PostgreSQL) 10.10
OS: uname -a Linux DL2100 3.10.38 #1 SMP Build-gitb1820a8 x86_64 GNU/Linux
selecting default max_connections … 100
selecting default shared_buffers … 128MB
selecting default timezone … Europe/Helsinki
selecting dynamic shared memory implementation … posix
creating configuration files … ok
running bootstrap script … 2020-11-03 11:52:56.303 EET [3928] DEBUG: invoking IpcMemoryCreate(size=148545536)
2020-11-03 11:52:56.303 EET [3928] DEBUG: mmap(148897792) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory
2020-11-03 11:52:56.315 EET [3928] DEBUG: SlruScanDirectory invoking callback on pg_notify/0000
2020-11-03 11:52:56.315 EET [3928] DEBUG: removing file "pg_notify/0000"
2020-11-03 11:52:56.316 EET [3928] DEBUG: dynamic shared memory system will support 288 segments
2020-11-03 11:52:56.316 EET [3928] DEBUG: created dynamic shared memory control segment 1852866650 (6928 bytes)
2020-11-03 11:52:56.319 EET [3928] PANIC: could not generate secret authorization token
Aborted
child process exited with exit code 134
The error is thrown in BootStrapXLOG in src/backend/access/transam/xlog.c:
    /*
     * Generate a random nonce. This is used for authentication requests that
     * will fail because the user does not exist. The nonce is used to create
     * a genuine-looking password challenge for the non-existent user, in lieu
     * of an actual stored password.
     */
    if (!pg_backend_random(mock_auth_nonce, MOCK_AUTH_NONCE_LEN))
        ereport(PANIC,
                (errcode(ERRCODE_INTERNAL_ERROR),
                 errmsg("could not generate secret authorization token")));
src/backend/utils/misc/backend_random.c says:
pg_backend_random() function fills a buffer with random bytes. Normally,
it is just a thin wrapper around pg_strong_random(), but when compiled
with --disable-strong-random, we provide a built-in implementation.
So it seems that PostgreSQL was built on a system that had a source for strong random numbers (OpenSSL or /dev/urandom, if you are not on Windows), but the facility is not working on your current system.
Try the latest minor release of v10 (currently 10.15) – maybe a bug has been fixed.
Run pg_config --configure to check whether PostgreSQL was built --with-openssl.
OpenSSL also uses /dev/urandom, so there is likely a problem with that source of random numbers; investigate there.
If all else fails, build PostgreSQL from source and configure it with
./configure --disable-strong-random ...
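For the pg_config and /dev/urandom checks, a couple of illustrative commands (mine, not from the original answer):

# Should print 16 random bytes in hex; an error or a long hang points at /dev/urandom itself
head -c 16 /dev/urandom | od -An -tx1

# Lists the options PostgreSQL was configured with; look for --with-openssl
pg_config --configure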
It worked fine. Thank you very much, @Laurenz Albe
On CentOS, I ran into the following error:
sudo snmptrap -v 2c -c read localhost '' UPS-MIB::upsTraps
MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs
Cannot find module (UPS-MIB): At line 0 in (none)
UPS-MIB::upsTraps: Unknown Object Identifier
The above error happened after:
I copied UPS-MIB.txt to /usr/share/snmp/mibs
I started snmptrapd:
snmptrapd -f -Lo -Dread-config -m ALL
The Net-SNMP version is 5.2.x.
The same procedure works fine with Ubuntu 18.04/Net-SNMP 5.3.7.
How can I debug and fix this problem?
Besides the Net-SNMP version difference: on Ubuntu I found instructions to install a MIB download tool, run it after installing Net-SNMP, and comment out the lines beginning with mibs: in snmp.conf, in order to fix errors about missing MIBs.
However, for CentOS I found no such instruction, so I have not done it yet, as there is no error message about missing MIBs there.
The MIB file was downloaded from https://tools.ietf.org/rfc/rfc1628.txt
and renamed to UPS-MIB.txt. (It seems to me that the name of the MIB file does not matter, as long as it's unique? I tried different names, upsMIB.txt and rfc1628.txt, but that did not help.)
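One way to debug this (my suggestion, not part of the original post) is to ask the Net-SNMP MIB parser directly whether it can load the module; parser errors are printed to the terminal:

# Try to resolve the object name with the copied MIB module added to the default list
snmptranslate -m +UPS-MIB -IR upsTraps

# Ask the MIB parser for detailed output while it reads the files
snmptranslate -Dparse-mibs -m +UPS-MIB -IR upsTraps 2>&1 | less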
I solved the problem as follows:
Manually copied /usr/share/snmp/mibs/ietf/UPS-MIB from an Ubuntu machine with Net-SNMP 5.7.3 installed to /usr/share/snmp/mibs/UPS-MIB on the CentOS machine,
then restarted snmpd
with the command:
service snmpd restart
After that, the OIDs from UPS-MIB became visible and accessible.
Maybe the file downloaded from https://tools.ietf.org/rfc/rfc1628.txt is not suitable? (Presumably because the RFC text wraps the MIB module in RFC boilerplate, whereas the file shipped with Net-SNMP is the extracted MIB module itself.)
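To confirm the fix, a quick check (again, my addition rather than part of the original answer):

# Prints the numeric OID for UPS-MIB::upsTraps if the module now loads,
# instead of the "Cannot find module" error
snmptranslate -On -m +UPS-MIB -IR upsTraps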
I am trying the 30-day trial of dashDB Local. I followed the steps described in this link:
https://www.ibm.com/support/knowledgecenter/en/SS6NHC/com.ibm.swg.im.dashdb.doc/admin/linux_deploy.html
I did not create a node configuration file because mine is an SMP setup.
Logged into my docker hub account and pulled the image.
docker login -u xxx -p yyyyy
docker pull ibmdashdb/local:latest-linux
The pull took 5 minutes or so. I waited for the image download to complete.
Ran the following command. It completed successfully.
docker run -d -it --privileged=true --net=host --name=dashDB -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 ibmdashdb/local:latest-linux
Ran the logs command:
docker logs --follow dashDB
This showed that dashDB did not start but exited with error code 130:
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0f008f8e413d ibmdashdb/local:latest-linux "/usr/sbin/init" 16 seconds ago Exited (130) 1 seconds ago dashDB
#
The logs command shows this:
2017-05-17T17:48:11.285582000Z Detected virtualization docker.
2017-05-17T17:48:11.286078000Z Detected architecture x86-64.
2017-05-17T17:48:11.286481000Z
2017-05-17T17:48:11.294224000Z Welcome to dashDB Local!
2017-05-17T17:48:11.294621000Z
2017-05-17T17:48:11.295022000Z Set hostname to <orion>.
2017-05-17T17:48:11.547189000Z Cannot add dependency job for unit systemd-tmpfiles-clean.timer, ignoring: Unit is masked.
2017-05-17T17:48:11.547619000Z [ OK ] Reached target Timers.
<snip>
2017-05-17T17:48:13.361610000Z [ OK ] Started The entrypoint script for initializing dashDB local.
2017-05-17T17:48:19.729980000Z [100209.207731] start_dashDB_local.sh[161]: /usr/lib/dashDB_local_common_functions.sh: line 1816: /tmp/etc_profile-LOCAL.cfg: No such file or directory
2017-05-17T17:48:20.236127000Z [100209.713223] start_dashDB_local.sh[161]: The dashDB Local container's environment is not set up yet.
2017-05-17T17:48:20.275248000Z [ OK ] Stopped Create Volatile Files and Directories.
<snip>
2017-05-17T17:48:20.737471000Z Sending SIGTERM to remaining processes...
2017-05-17T17:48:20.840909000Z Sending SIGKILL to remaining processes...
2017-05-17T17:48:20.880537000Z Powering off.
So it looks like start_dashDB_local.sh is failing at line 1816 of /usr/lib/dashDB_local_common_functions.sh? I exported the image, and this is the function around line 1816 of dashDB_local_common_functions.sh:
update_etc_profile()
{
    local runtime_env=$1
    local cfg_file
    # Check if /etc/profile/dashdb_env.sh is already updated
    grep -q BLUMETAHOME /etc/profile.d/dashdb_env.sh
    if [ $? -eq 0 ]; then
        return
    fi
    case "$runtime_env" in
        "AWS" | "V1.5" ) cfg_file="/tmp/etc_profile-V15_AWS.cfg"
            ;;
        "V2.0" ) cfg_file="/tmp/etc_profile-V20.cfg"
            ;;
        "LOCAL" ) # dashDB Local Case and also the default
            cfg_file="/tmp/etc_profile-LOCAL.cfg"
            ;;
        *) logger_error "Invalid ${runtime_env} value"
            return
            ;;
    esac
I also see /tmp/etc_profile-LOCAL.cfg in the image. Did I miss any step here?
I also created the /mnt/clusterfs/nodes file, but it did not help; the same docker run command failed in the same way.
Please help.
I am using x86_64 Fedora25.
# docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-6.gitae7d637.fc25.x86_64
Go version: go1.7.4
Git commit: ae7d637/1.12.6
Built: Mon Jan 30 16:15:28 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-6.gitae7d637.fc25.x86_64
Go version: go1.7.4
Git commit: ae7d637/1.12.6
Built: Mon Jan 30 16:15:28 2017
OS/Arch: linux/amd64
#
# cat /etc/fedora-release
Fedora release 25 (Twenty Five)
# uname -r
4.10.15-200.fc25.x86_64
#
Thanks for bringing this to our attention. I reached out to our developer team. It seems this is happening because, inside the container, tmpfs gets mounted onto /tmp and wipes out all the scripts there.
We have seen this issue, and moving to the latest version of Docker seems to fix it. Your docker version output shows you are on an older version.
So please install the latest Docker version, retry the deployment of dashDB Local, and update here.
Regards
Murali
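For reference, on Fedora 25 upgrading Docker and retrying could look roughly like this (illustrative commands, not an exact recipe; adjust to how Docker was installed on your system):

sudo dnf upgrade -y docker        # or install docker-ce from Docker's own repository
sudo systemctl restart docker

# Remove the failed container and run it again with the same options as before
docker rm dashDB
docker run -d -it --privileged=true --net=host --name=dashDB \
    -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 \
    ibmdashdb/local:latest-linux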
I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from help(memcache). When I wasn't getting the expected results, I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (suposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (suposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is, what have I failed to do in my installation? It appears to be working from an API perspective, but it fails to put anything into the shared memcached area.
I'm using a VirtualBox VM running CentOS.
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obviously named one when I do a ps.
I tried to get pylibmc installed on my VM but couldn't get a working installation, so for now I will see if I can get the above working first.
I discovered that if I run straight from the interactive Python console I get a bit more output when I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try to use telnet to connect to the port, as in the example, I get a connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
    flags           = REUSE
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/sbin/in.telnetd
    log_on_failure  += USERID
    disable         = no
}
And then ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran it both ways (iptables started and stopped), but it had no effect, so I am out of ideas. What do I need to do to make the port accessible, if that is the problem?
Or is there a memcached service that needs to be running in order to open up the port?
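"Connection refused" on 127.0.0.1:11211 normally just means nothing is listening on that port; the telnet client alone is enough for this test, so the in.telnetd/xinetd configuration is beside the point. A quick, library-independent check (my addition, not from the original post) is a plain socket probe:

import socket

try:
    # memcached answers the plain-text "version" command if it is running
    s = socket.create_connection(("127.0.0.1", 11211), timeout=2)
    s.sendall(b"version\r\n")
    print(s.recv(1024))          # e.g. "VERSION 1.4.4" when the daemon is up
    s.close()
except socket.error as exc:
    print("memcached is not reachable: %s" % exc)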
Well, this is what it took to get it working (a series of manual steps):
1) su -
cd /var/run
mkdir memcached # this was missing
In the memcached file I added "-l 127.0.0.1" to the OPTIONS statement; it's apparently a listen option. Do this for steps 2 and 3 (I'm not certain which file is actually used at runtime); a sample config is shown after these steps.
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
4) Try some commands to see if the server starts now
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
5) http://127.0.0.1:11211
I tried opening this in a browser, but it never seemed to actually display anything, so I don't really know how valid this approach is. I'm not running Apache or anything like that, so perhaps it's not relevant to my case. Perhaps I would have to supply a ?key=blah or something.
6) Now it should be ready to go. If you run the test shown below, it should work; at least it did for me. help(memcache) will display a simple program; just paste that in and it should work just fine.
[~]$ python
>>> import memcache
>>> help(memcache)
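For reference, the edited /etc/sysconfig/memcached mentioned in steps 2 and 3 might look roughly like this; the values other than OPTIONS are the stock CentOS defaults, so treat it as a sketch rather than an exact copy of my file:

PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
# -l sets the listen address so the local client can connect
OPTIONS="-l 127.0.0.1"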
I am running fabric to automate deployment. It is painfully slow.
My local environment:
(somenv)bob@sh ~/code/somenv/somenv/fabfile $ > uname -a
Darwin sh.local 12.4.0 Darwin Kernel Version 12.4.0: Wed May 1 17:57:12 PDT 2013; root:xnu-2050.24.15~1/RELEASE_X86_64 x86_64
My fab file:
#!/usr/bin/env python
import logging
import paramiko as ssh
from fabric.api import env, run
env.hosts = [ 'examplesite']
env.use_ssh_config = True
#env.forward_agent = True
logging.basicConfig(level=logging.INFO)
ssh.util.log_to_file('/tmp/paramiko.log')
def uptime():
    run('uptime')
Here is the portion of the debug logs:
(somenv)bob@sh ~/code/somenv/somenv/fabfile $ > date;fab -f /Users/bob/code/somenv/somenv/fabfile/pefabfile.py uptime
Sun Aug 11 22:25:03 EDT 2013
[examplesite] Executing task 'uptime'
[examplesite] run: uptime
DEB [20130811-22:25:23.610] thr=1 paramiko.transport: starting thread (client mode): 0x13e4650L
INF [20130811-22:25:23.630] thr=1 paramiko.transport: Connected (version 2.0, client OpenSSH_5.9p1)
DEB [20130811-22:25:23.641] thr=1 paramiko.transport: kex algos:['ecdh-sha2-nistp256', 'ecdh-sha2-nistp384', 'ecdh-sha2-nistp521', 'diffie-hellman-grou
It takes 20 seconds before paramiko even starts the thread. Surely executing the 'uptime' task does not take that long; I can manually log in through ssh, type in uptime, and exit in 5-6 seconds. I'd appreciate any help on how to extract more debug information. I made the changes mentioned here, but no difference.
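One simple way to narrow this down (my suggestion, not from the answers below) is to time a raw ssh round trip against Fabric's, and let the ssh client show which phase stalls:

# Time a plain ssh round trip for comparison with Fabric's run()
time ssh examplesite uptime

# Verbose client output shows which phase (DNS lookup, key exchange, auth) is slow
ssh -vvv examplesite uptime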
Try:
env.disable_known_hosts = True
See:
https://github.com/paramiko/paramiko/pull/192
&
Slow public key authentication with paramiko
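Applied to the fabfile above, that would look something like this (a minimal sketch):

#!/usr/bin/env python
from fabric.api import env, run

env.hosts = ['examplesite']
env.use_ssh_config = True
# Skip loading and searching the known_hosts file, which was the slow step
# in older paramiko versions
env.disable_known_hosts = True

def uptime():
    run('uptime')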
Maybe it is a problem with DNS resolution and/or IPv6.
A few things you can try:
replacing the server name with its IP address in env.hosts
disabling IPv6
using another DNS server (e.g. OpenDNS)
For anyone looking at this post-2014: paramiko, which was the slow component when checking known hosts, introduced a fix in March 2014 (v1.13); it was allowed as a requirement by Fabric in v1.9.0 and backported to v1.8.4 and v1.7.4.
So, upgrade!