dashDB Local on Fedora 25 - error code 130 - ibm-cloud

I am trying the 30-day trial of dashDB Local. I followed the steps described in the link:
https://www.ibm.com/support/knowledgecenter/en/SS6NHC/com.ibm.swg.im.dashdb.doc/admin/linux_deploy.html
I did not create a node configuration file because mine is an SMP setup.
I logged into my Docker Hub account and pulled the image:
docker login -u xxx -p yyyyy
docker pull ibmdashdb/local:latest-linux
The pull took about 5 minutes; I waited for the image download to complete.
I then ran the following command, which completed successfully:
docker run -d -it --privileged=true --net=host --name=dashDB -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 ibmdashdb/local:latest-linux
Next, I ran the logs command:
docker logs --follow dashDB
It showed that dashDB did not start but exited with error code 130:
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0f008f8e413d ibmdashdb/local:latest-linux "/usr/sbin/init" 16 seconds ago Exited (130) 1 seconds ago dashDB
#
The logs command shows this:
2017-05-17T17:48:11.285582000Z Detected virtualization docker.
2017-05-17T17:48:11.286078000Z Detected architecture x86-64.
2017-05-17T17:48:11.286481000Z
2017-05-17T17:48:11.294224000Z Welcome to dashDB Local!
2017-05-17T17:48:11.294621000Z
2017-05-17T17:48:11.295022000Z Set hostname to <orion>.
2017-05-17T17:48:11.547189000Z Cannot add dependency job for unit systemd-tmpfiles-clean.timer, ignoring: Unit is masked.
2017-05-17T17:48:11.547619000Z [ OK ] Reached target Timers.
<snip>
2017-05-17T17:48:13.361610000Z [ OK ] Started The entrypoint script for initializing dashDB local.
2017-05-17T17:48:19.729980000Z [100209.207731] start_dashDB_local.sh[161]: /usr/lib/dashDB_local_common_functions.sh: line 1816: /tmp/etc_profile-LOCAL.cfg: No such file or directory
2017-05-17T17:48:20.236127000Z [100209.713223] start_dashDB_local.sh[161]: The dashDB Local container's environment is not set up yet.
2017-05-17T17:48:20.275248000Z [ OK ] Stopped Create Volatile Files and Directories.
<snip>
2017-05-17T17:48:20.737471000Z Sending SIGTERM to remaining processes...
2017-05-17T17:48:20.840909000Z Sending SIGKILL to remaining processes...
2017-05-17T17:48:20.880537000Z Powering off.
So it looks like start_dashDB_local.sh is failing at line 1816 of /usr/lib/dashDB_local_common_functions.sh? I exported the image, and line 1816 of dashDB_local_common_functions.sh falls inside this function:
update_etc_profile()
{
    local runtime_env=$1
    local cfg_file

    # Check if /etc/profile/dashdb_env.sh is already updated
    grep -q BLUMETAHOME /etc/profile.d/dashdb_env.sh
    if [ $? -eq 0 ]; then
        return
    fi

    case "$runtime_env" in
        "AWS" | "V1.5" ) cfg_file="/tmp/etc_profile-V15_AWS.cfg"
            ;;
        "V2.0" ) cfg_file="/tmp/etc_profile-V20.cfg"
            ;;
        "LOCAL" ) # dashDB Local Case and also the default
            cfg_file="/tmp/etc_profile-LOCAL.cfg"
            ;;
        *) logger_error "Invalid ${runtime_env} value"
            return
            ;;
    esac
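(Aside: one way to pull that script out of the image for inspection, without exporting the whole filesystem, is docker-py's get_archive; this is only a sketch, and the container name is arbitrary.)

import io
import tarfile
import docker

# Create (not run) a throwaway container so nothing executes, then copy the
# script out of it. get_archive returns (a chunk generator, a stat dict).
client = docker.from_env()
c = client.containers.create('ibmdashdb/local:latest-linux', name='dashdb-inspect')
bits, _ = c.get_archive('/usr/lib/dashDB_local_common_functions.sh')
tarfile.open(fileobj=io.BytesIO(b''.join(bits))).extractall('.')
c.remove()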
I can also see /tmp/etc_profile-LOCAL.cfg in the image. Did I miss a step here?
I also created a /mnt/clusterfs/nodes file, but it did not help; the same docker run command failed in the same way.
Please help.
I am using x86_64 Fedora 25.
# docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-6.gitae7d637.fc25.x86_64
Go version: go1.7.4
Git commit: ae7d637/1.12.6
Built: Mon Jan 30 16:15:28 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-6.gitae7d637.fc25.x86_64
Go version: go1.7.4
Git commit: ae7d637/1.12.6
Built: Mon Jan 30 16:15:28 2017
OS/Arch: linux/amd64
#
# cat /etc/fedora-release
Fedora release 25 (Twenty Five)
# uname -r
4.10.15-200.fc25.x86_64
#

Thanks for bringing this to our attention. I reached out to our developer team. It seems this is happening because, inside the container, tmpfs gets mounted onto /tmp and wipes out all the scripts.
We have seen this issue, and moving to the latest version of Docker seems to fix it. Your docker version output shows you are on an older version.
So please install the latest Docker version, retry the deployment of dashDB Local, and update here.
Regards
Murali
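For anyone who wants to confirm this failure mode from a script, here is a minimal sketch using the docker Python SDK (docker-py); the image name comes from the question, everything else is an illustration. Overriding the entrypoint keeps systemd (which mounts the tmpfs over /tmp) from starting, so the config file shipped in the image should still be visible:

import docker

client = docker.from_env()
print(client.version()['Version'])  # the daemon version, e.g. '1.12.6' on the affected host

# Bypass /usr/sbin/init so systemd never mounts a tmpfs over /tmp;
# the file baked into the image should then be listed.
out = client.containers.run(
    'ibmdashdb/local:latest-linux',
    ['ls -l /tmp/etc_profile-LOCAL.cfg'],
    entrypoint=['/bin/sh', '-c'],
    remove=True,
)
print(out.decode())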

Related

How to connect python s3fs client to a running Minio docker container?

For test purposes, I'm trying to connect a module that introduces an abstraction layer over s3fs with custom business logic.
It seems like I have trouble connecting the s3fs client to the Minio container.
Here's how I created the container and attached the s3fs client (below I describe how I validated that the container is running properly):
import s3fs
import docker

client = docker.from_env()
container = client.containers.run(
    'minio/minio',
    "server /data --console-address ':9090'",
    environment={
        "MINIO_ACCESS_KEY": "minio",
        "MINIO_SECRET_KEY": "minio123",
    },
    ports={
        "9000/tcp": 9000,
        "9090/tcp": 9090,
    },
    volumes={'/tmp/minio': {'bind': '/data', 'mode': 'rw'}},
    detach=True,
)
container.reload()  # why reload: https://github.com/docker/docker-py/issues/2681

fs = s3fs.S3FileSystem(
    anon=False,
    key='minio',
    secret='minio123',
    use_ssl=False,
    client_kwargs={
        'endpoint_url': 'http://localhost:9000'  # tried 127.0.0.1:9000 with no success
    },
)
===========
>>> fs.ls('/')
[]
>>> fs.ls('/data')
'Bucket does not exist' exception
Check that the container is running:
➜ ~ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
127e22c19a65 minio/minio "/usr/bin/docker-ent…" 56 seconds ago Up 55 seconds 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp hardcore_ride
Check that the relevant volume is attached:
➜ ~ docker exec -it 127e22c19a65 bash
[root@127e22c19a65 /]# ls -l /data/
total 4
-rw-rw-r-- 1 1000 1000 4 Jan 11 16:02 foo.txt
[root@127e22c19a65 /]# exit
Since I proved the volume binding is working properly by shelling into the container, I expected to see the same results when attaching to the container's filesystem via the s3fs client.
What is the bucket name that was created as part of this setup?
From the docs, you have to use <bucket_name>/<object_path> syntax to access resources:
fs.ls('my-bucket')
['my-file.txt']
Also, if you look at the docs below, there are a couple of other ways to access objects, e.g. using fs.open; can you give that a try?
https://buildmedia.readthedocs.org/media/pdf/s3fs/latest/s3fs.pdf
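To illustrate the bucket-qualified paths, here is a minimal sketch against the Minio container from the question; the bucket and file names are made up. Note that '/data' is a path inside the container, not an S3 bucket, so it is never visible through the S3 API; with s3fs, creating a top-level "directory" creates a bucket:

import s3fs

fs = s3fs.S3FileSystem(
    anon=False,
    key='minio',
    secret='minio123',
    use_ssl=False,
    client_kwargs={'endpoint_url': 'http://localhost:9000'},
)

fs.mkdir('my-bucket')  # at the top level, mkdir creates a bucket on Minio
with fs.open('my-bucket/my-file.txt', 'wb') as f:
    f.write(b'hello')
print(fs.ls('my-bucket'))  # ['my-bucket/my-file.txt']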

Cloud Code for Visual Studio Code errors on Cloud Code: Deploy

I've been trying to set up Cloud Code with VS Code, and I've been running into problems when starting the deploy process with Cloud Code: Deploy.
I've tried deploying the samples, python-hello-world-1 as well as go-hello-world-1, to my Kubernetes cluster on GKE, but I always end up getting errors when the deploy process starts downloading packages:
Go Output
Running: skaffold run --enable-rpc -v info --rpc-http-port 49869 --filename skaffold.yaml --default-repo gcr.io/abx-lernende
starting gRPC server on port 50051
starting gRPC HTTP server on port 49869
Using kubectl context: gke_abx-lernende_europe-west4-a_joshu-test-cluster
Generating tags...
- go-hello-world -> gcr.io/abx-lernende/go-hello-world:latest
Checking cache...
- go-hello-world: Not found. Building
Building [go-hello-world]...
Sending build context to Docker daemon 57.86kB
Step 1/8 : FROM golang:1.13
---> 6586e3d10e96
Step 2/8 : RUN go get -u -v github.com/go-delve/delve/cmd/dlv
---> Running in b75ce8e5dae9
github.com/go-delve/delve (download)
# cd .; git clone -- https://github.com/go-delve/delve /go/src/github.com/go-delve/delve
Cloning into '/go/src/github.com/go-delve/delve'...
fatal: unable to access 'https://github.com/go-delve/delve/': Failed to connect to github.com port 443: Connection refused
package github.com/go-delve/delve/cmd/dlv: exit status 128
failed to build: build failed: building [go-hello-world]: build artifact: unable to stream build output: The command '/bin/sh -c go get -u -v github.com/go-delve/delve/cmd/dlv' returned a non-zero code: 1
Exited with code 1.
Python Output
Running: skaffold run --enable-rpc -v info --rpc-http-port 50185 --filename
skaffold.yaml --default-repo gcr.io/abx-lernende
starting gRPC server on port 50051
starting gRPC HTTP server on port 50185
Skaffold &{Version:v1.3.1 ConfigVersion:skaffold/v2alpha3 GitVersion: GitCommit:6ba887a42438d1da578a005cf550e618fee6dfb8 GitTreeState:clean BuildDate:2020-01-31T19:55:18Z GoVersion:go1.13.4 Compiler:gc Platform:windows/amd64}
Using kubectl context: gke_abx-lernende_europe-west4-a_joshu-test-cluster
Generating tags...
- python-hello-world -> Tags generated in 0s
gcr.io/abx-lernende/python-hello-world:latest
Checking cache...
- python-hello-world: Cache check complete in 6.0001ms
Not found. Building
Building [python-hello-world]...
Sending build context to Docker daemon 4.608kB
Step 1/7 : FROM python:3.8
---> efdecc2e377a
Step 2/7 : WORKDIR /app
---> Using cache
---> a131b81cad66
Step 3/7 : COPY requirements.txt .
---> Using cache
---> 4625ef1862bd
Step 4/7 : RUN pip install --trusted-host pypi.python.org -r requirements.txt
---> Running in 4da23a158ae3
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f17ba9c9d60>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/flask/
I'm assuming this is due to me being behind a corporate proxy. As countermeasures, I have explicitly configured VS Code, Git, pip, Go, and the Google Cloud SDK to use said proxy. On top of that, I set the Windows environment variables for the proxy, sadly without success.
Thanks!
You can configure docker to pass through proxy information into the containers by adding something like the following to your ~/.docker/config.json:
{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.1.12:3128",
      "httpsProxy": "http://192.168.1.12:3128"
    }
  }
}
Docker will set the HTTP_PROXY/HTTPS_PROXY environment variables within the container, which are picked up by many tools.
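A quick way to verify the injection, sketched in Python (busybox is just an arbitrary small image):

import subprocess

# Run a throwaway container and grep its environment for the proxy
# variables Docker should inject from ~/.docker/config.json.
out = subprocess.run(
    ['docker', 'run', '--rm', 'busybox', 'env'],
    capture_output=True, text=True, check=True,
).stdout
print([line for line in out.splitlines() if 'proxy' in line.lower()])
# expect e.g. ['HTTP_PROXY=http://192.168.1.12:3128', 'HTTPS_PROXY=http://192.168.1.12:3128']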

Is it possible to run Karate tests in a pod? If possible, then how?

I just want to know whether I can run Karate tests in a pod, or whether there is a good suggestion on how to run them.
I tried running the Karate tests in a terminal and they work. I just want to know if I can run them from a Kubernetes pod. Nginx is also running in the pod.
You can run anything in a pod that you run outside it; a pod runs containers inside it.
So create a Dockerfile, build a Docker image from it, and use that image to start the Karate pod (a sketch of creating such a pod with the Kubernetes Python client appears at the end of this thread).
You can write the Dockerfile like this:
FROM maven:3-jdk-8-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY settings.xml /usr/share/maven/ref/
COPY pom.xml /tmp/pom.xml
COPY . /usr/src/app
RUN mvn -B -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml prepare-package -DskipTests
CMD ["/usr/src/app/maven_runner.sh"]
I found an example here: https://github.com/neillfontes/karate-sample
Posting as Community Wiki for future use.
@Harsh Manvar provided a good example; however, if you just build it from the Dockerfile, you will receive errors. You have to download all the files mentioned on GitHub. The correct order is:
$ git clone https://github.com/neillfontes/karate-sample.git
$ cd karate-sample
$ docker build -t karate_docker .
After the image is built, you can check it:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
karate_docker latest 9dc6d7a5278a About a minute ago 136MB
Later you can start testing using:
$ docker run karate_docker
START: Running tests...
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running demo.DemoTest
11:57:49.684 [main] DEBUG c.i.karate.cucumber.CucumberRunner - init test class: class demo.DemoTest
11:57:50.412 [main] DEBUG c.i.karate.cucumber.CucumberRunner - loading feature: /usr/src/app/target/test-classes/demo/features/get-token.feature
11:57:50.663 [main] DEBUG c.i.karate.cucumber.CucumberRunner - loading feature: /usr/src/app/target/test-classes/demo/features/make-request.feature
11:57:53.898 [main] INFO com.intuit.karate.ScriptBridge - karate.env system property was: null
11:57:54.867 [main] DEBUG c.i.k.h.a.RequestLoggingInterceptor -
1 > POST http://brentertainment.com/oauth2/lockdin/token
1 > Accept-Encoding: gzip,deflate
1 > Connection: Keep-Alive
1 > Content-Length: 96
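As promised above, here is a sketch of running the built image as a one-shot pod using the Kubernetes Python client; it assumes the image has been pushed to a registry your cluster can pull from, and the image name below is a placeholder:

from kubernetes import client, config

# Run the Karate image once as a pod; read the results with 'kubectl logs karate-tests'.
config.load_kube_config()  # uses your current kubectl context
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name='karate-tests'),
    spec=client.V1PodSpec(
        restart_policy='Never',  # one-shot: do not restart when the tests finish
        containers=[client.V1Container(
            name='karate',
            image='your-registry/karate_docker:latest',  # placeholder image name
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace='default', body=pod)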

Informix oninit -v bad INFORMIXSERVER

I was able to install Informix on CentOS 7 without much trouble. Now that everything is set up, I am attempting to follow a tutorial to create a dbspace. The first step is to check whether the server is up and ready using the oninit -v command, but this fails with the error:
bad INFORMIXSERVER
yeah, very descriptive...
Can someone help me troubleshoot this? There is a giant lack of information about Informix on the Internet, so I don't know where to begin.
Informix version: 12.10
CentOS version: 7
Environment variables:
-bash-4.2$ echo $INFORMIXDIR
/opt/informix
-bash-4.2$ echo $INFORMIXSERVER
miServidor
-bash-4.2$
Regards!
If you want to check if the server is up and running, run "onstat -":
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ echo $INFORMIXSERVER
irk1210
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ onstat -
IBM Informix Dynamic Server Version 12.10.FC10 -- On-Line -- Up 18 days 02:39:28 -- 219948 Kbytes
informix@irk:/data/informix/IBM/12.10.FC10/tmp$
"oninit -v" will attempt to start the server.
"oninit -V" (capital V) will show the version of the oninit binary.
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ oninit -V
IBM Informix Dynamic Server Version 12.10.FC10 Software Serial Number AAA#B000000
Mon Oct 23 12:55:56 CDT 2017
informix@irk:/data/informix/IBM/12.10.FC10/tmp$
Check that the INFORMIXSERVER environment variable is set. If it is not, you will get the following errors from 'oninit' and 'onstat':
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ unset INFORMIXSERVER
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ oninit -v
bad INFORMIXSERVERinformix@irk:/data/informix/IBM/12.10.FC10/tmp$
informix@irk:/data/informix/IBM/12.10.FC10/tmp$ onstat -
shared memory not initialized for INFORMIXSERVER '<NULL>'
informix@irk:/data/informix/IBM/12.10.FC10/tmp$
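The same pre-flight check, sketched in Python (it assumes the Informix binaries are on the PATH, as in the shell sessions above):

import os
import subprocess

# 'oninit -v' fails with 'bad INFORMIXSERVER' when the variable is missing,
# so verify it before touching the server.
if not os.environ.get('INFORMIXSERVER'):
    raise SystemExit("INFORMIXSERVER is not set")

# 'onstat -' exits non-zero when the server is offline, so don't use check=True.
status = subprocess.run(['onstat', '-'], capture_output=True, text=True)
print(status.stdout or status.stderr)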

Fabric takes a long time with ssh

I am running Fabric to automate deployment. It is painfully slow.
My local environment:
(somenv)bob@sh ~/code/somenv/somenv/fabfile $ > uname -a
Darwin sh.local 12.4.0 Darwin Kernel Version 12.4.0: Wed May 1 17:57:12 PDT 2013; root:xnu-2050.24.15~1/RELEASE_X86_64 x86_64
My fabfile:
#!/usr/bin/env python
import logging

import paramiko as ssh
from fabric.api import env, run

env.hosts = ['examplesite']
env.use_ssh_config = True
# env.forward_agent = True

logging.basicConfig(level=logging.INFO)
ssh.util.log_to_file('/tmp/paramiko.log')

def uptime():
    run('uptime')
Here is the portion of the debug logs:
(somenv)bob@sh ~/code/somenv/somenv/fabfile $ > date;fab -f /Users/bob/code/somenv/somenv/fabfile/pefabfile.py uptime
Sun Aug 11 22:25:03 EDT 2013
[examplesite] Executing task 'uptime'
[examplesite] run: uptime
DEB [20130811-22:25:23.610] thr=1 paramiko.transport: starting thread (client mode): 0x13e4650L
INF [20130811-22:25:23.630] thr=1 paramiko.transport: Connected (version 2.0, client OpenSSH_5.9p1)
DEB [20130811-22:25:23.641] thr=1 paramiko.transport: kex algos:['ecdh-sha2-nistp256', 'ecdh-sha2-nistp384', 'ecdh-sha2-nistp521', 'diffie-hellman-grou
It takes 20 seconds before paramiko even starts the thread. Surely, executing the task 'uptime' does not take that long. I can manually log in through ssh, type uptime, and exit in 5-6 seconds. I'd appreciate any help on how to extract more debug information. I made the changes mentioned here, but it made no difference.
Try:
env.disable_known_hosts = True
See:
https://github.com/paramiko/paramiko/pull/192
&
Slow public key authentication with paramiko
Maybe it is a problem with DNS resolution and/or IPv6.
A few things you can try (combined in the sketch after this list):
replace the server name with its IP address in env.hosts
disable IPv6
use another DNS server (e.g. OpenDNS)
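A minimal fabfile sketch combining those suggestions with the earlier answer's env.disable_known_hosts (the IP address is a placeholder):

#!/usr/bin/env python
from fabric.api import env, run

env.hosts = ['203.0.113.10']    # placeholder: the server's IP instead of its name
env.use_ssh_config = True
env.disable_known_hosts = True  # skip the slow known_hosts handling

def uptime():
    run('uptime')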
For anyone looking at this post-2014: paramiko, which was the slow component when checking known hosts, introduced a fix in March 2014 (v1.13); it was allowed as a requirement by Fabric in v1.9.0 and backported to v1.8.4 and v1.7.4.
So, upgrade!