Bluemix: cf push using DEA instead of DIEGO architecture - ibm-cloud

When deploying an application into dedicated Bluemix, it uses the DEA architecture by default. How can I force it to use the DIEGO architecture instead?

You have to use a few more steps: deploy without starting, switch to Diego, then start.
cf push APPLICATION_NAME --no-start
cf enable-diego APPLICATION_NAME
cf start APPLICATION_NAME
Ref: Deploying Apps

I built a bash script to do this, which will use your existing manifest.yml file and pack all of this into a single command. The contents of the script follow:
#!/bin/bash
filename="manifest.yml"
if [ -e "$filename" ];
then
    echo "using manifest.yml file in this directory"
else
    echo "no manifest.yml file found. exiting"
    exit -2
fi
shopt -s nocasematch
string='name:'
targetName=""
echo "Retrieving name from manifest file"
while read -r line
do
    name="$line"
    variable=${name%%:*}
    if [[ $variable == *"name"* ]]
    then
        inBound=${name#*:}
        targetName="$(echo -e "${inBound}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
    fi
done < "$filename"
if [ "$targetName" == "" ];
then
    echo "Could not find name of application in manifest.yml file. Cancelling build."
    echo "application name is identified by the 'name: ' term in the manifest.yml file"
    exit -1
else
    echo "starting cf push for $targetName"
    cf push --no-start
    echo "cf enable-diego $targetName"
    cf enable-diego "$targetName"
    echo "cf start $targetName"
    cf start "$targetName"
    exit 0
fi
Just put this code into your editor as a new file and then make the file executable. I keep a copy of this script in each of my repos in the root directory. After doing a copy-paste and executing the script, you may get the following error:
/bin/bash^M: bad interpreter: No such file or directory
If you do, just run the dos2unix command against the file and it will fix up the line endings.
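For example (assuming the script was saved as deploy.sh; the filename is just an illustration):
dos2unix deploy.sh
chmod +x deploy.sh
./deploy.sh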

Related

How to configure Kubernetes initContainers to run after another has finished

I have a situation where I want to speed up deployment time by caching the git resources into a shared PVC.
This is the bash script I use to check out the resource and save it into a shared PVC folder:
#!/bin/bash
src="$1"
dir="$2"
echo "Check for existence of directory: $dir"
if [ -d "$dir" ]
then
    echo "$dir found, no need to clone the git"
else
    echo "$dir not found, clone $src into $dir"
    mkdir -p "$dir"
    chmod -R 777 "$dir"
    git clone "$src" "$dir"
    echo "cloned $dir"
fi
Given that I have a Deployment with more than 1 pod, and each pod has an initContainer, the problem with this approach is that all initContainers start at almost the same time.
They all check for the existence of the git resource directory. Let's say on the first deployment we don't have the git directory yet. The first initContainer then creates the directory and starts cloning the resource. Now the second and third initContainers see that the directory is already there, so they finish immediately, even though the clone is still in progress.
Is there a way to make the other initContainers wait for the first one to finish?
After reading the Kubernetes documentation, I don't think this is supported by default.
Edit 1:
The second solution I can think of is to deploy with 1 pod only and, after a successful deployment, scale it out automatically. However, I still don't know how to do this.
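For what it's worth, a rough sketch of how that scale-out could be scripted (not from the Kubernetes docs; the deployment name my-app, the manifest filename, and the replica count are assumptions):
kubectl apply -f deployment.yaml                          # deploy with replicas: 1
kubectl rollout status deployment/my-app --timeout=600s   # wait until the single pod is ready
kubectl scale deployment/my-app --replicas=3              # then scale out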
I have found a workaround. The idea is to create a lock file and have the other initContainers wait until the lock file is removed. In the initContainer, I prepare the script like this:
#!/bin/bash
src="$1"
dir="$2"
echo "Check for existence of directory: $dir/src"
if [ -d "$dir/src" ]
then
    echo "$dir/src found, check if .lock exists"
    until [ ! -f "$dir/.lock" ]
    do
        sleep 5
        echo 'After 5 seconds, .lock is still there, I will check again'
    done
    echo "Clone finished in another init container, I can die now"
    exit
else
    echo "$dir not found, clone $src into $dir"
    mkdir -p "$dir/src"
    echo "create .lock, make my friends wait for me"
    touch "$dir/.lock"
    ls -la "$dir"
    chmod -R 777 "$dir"
    git clone "$src" "$dir/src"
    echo "cloned $dir"
    echo "remove .lock now"
    rm "$dir/.lock"
fi
This is kind of a cheat, but it works. The script makes the other initContainers wait until the .lock file is removed; by then, the project has already been cloned.
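Note that there is still a small window between checking for $dir/src and touching .lock in which a second initContainer could also take the clone branch. A sketch of a variant that narrows this race by using mkdir as the lock (mkdir either succeeds or fails as a single operation on the shared volume; this is my own variation, not part of the original workaround):
#!/bin/bash
src="$1"
dir="$2"
if mkdir "$dir/.lock" 2>/dev/null; then
    # We own the lock: clone only if nobody has done it yet.
    if [ ! -d "$dir/src" ]; then
        git clone "$src" "$dir/src"
    fi
    rmdir "$dir/.lock"
else
    # Another initContainer owns the lock: wait until it is released.
    until [ ! -d "$dir/.lock" ]; do
        sleep 5
    done
fi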

Spark Submit on Kubernetes exit code

How can I check programmatically whether the Spark job succeeded or failed while running spark-submit? Usually the Unix exit code is used.
phase: Failed
container status:
container name: spark-kubernetes-driver
container image: <regstry>/spark-py:spark3.2.1
container state: terminated
container started at: 2022-03-25T19:10:51Z
container finished at: 2022-03-25T19:10:57Z
exit code: 1
termination reason: Error
2022-03-25 15:10:58,457 INFO submit.LoggingPodStatusWatcherImpl: Application Postgres-Minio-Kubernetes.py with submission ID spark:postgres-minio-kubernetes-py-b70d3f7fc27829ec-driver finished
2022-03-25 15:10:58,465 INFO util.ShutdownHookManager: Shutdown hook called
2022-03-25 15:10:58,466 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-3321e67c-73d5-422d-a26d-642a0235cf23
The process failed, yet when I get the exit code in Unix with echo $? it returns a zero exit code!
$ echo $?
0
The pod name that gets generated is also random. How can the spark-submit result be handled, apart from using the spark-on-k8s operator?
If you are using bash, one way is to grep on the output. You might have to grep on stderr or stdout depending on where the log output is being sent.
Something like this:
OUTPUT=`spark-submit ...`
if echo "$OUTPUT" | grep -q "exit code: 1"; then
    exit 1
fi
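Alternatively (a sketch, not part of the original answer), since the submission log reports the driver pod's status, you could also ask Kubernetes for the driver container's exit code directly; the pod name and namespace below are placeholders:
kubectl get pod <driver-pod-name> -n <namespace> \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}'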
In addition to the things which @Rico mentioned, I have also handled the cluster and client deploy modes by changing the spark-submit shell file in the $SPARK_HOME/bin directory, as follows.
#!/usr/bin/env bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
if [ -z "${SPARK_HOME}" ]; then
  source "$(dirname "$0")"/find-spark-home
fi

# disable randomized hash for string in Python 3.3+
export PYTHONHASHSEED=0

# check deployment mode.
if echo "$@" | grep -q "\-\-deploy-mode cluster";
then
    echo "cluster mode..";
    # temp log file for spark job.
    export TMP_LOG="/tmp/spark-job-log-$(date '+%Y-%m-%d-%H-%M-%S').log";
    exec "${SPARK_HOME}"/bin/spark-class org.apache.spark.deploy.SparkSubmit "$@" |& tee ${TMP_LOG};
    # when exit code 1 is contained in spark log, then return exit 1.
    if cat ${TMP_LOG} | grep -q "exit code: 1";
    then
        echo "exit code: 1";
        rm -rf ${TMP_LOG};
        exit 1;
    else
        echo "job succeeded.";
        rm -rf ${TMP_LOG};
        exit 0;
    fi
elif echo "$@" | grep -q "\-\-conf spark.submit.deployMode=cluster";
then
    echo "cluster mode..";
    # temp log file for spark job.
    export TMP_LOG="/tmp/spark-job-log-$(date '+%Y-%m-%d-%H-%M-%S').log";
    exec "${SPARK_HOME}"/bin/spark-class org.apache.spark.deploy.SparkSubmit "$@" |& tee ${TMP_LOG};
    # when exit code 1 is contained in spark log, then return exit 1.
    if cat ${TMP_LOG} | grep -q "exit code: 1";
    then
        echo "exit code: 1";
        rm -rf ${TMP_LOG};
        exit 1;
    else
        echo "job succeeded.";
        rm -rf ${TMP_LOG};
        exit 0;
    fi
else
    echo "client mode..";
    exec "${SPARK_HOME}"/bin/spark-class org.apache.spark.deploy.SparkSubmit "$@"
fi
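With this change, the shell exit status of spark-submit in cluster mode reflects whether the driver log contained a failure. A usage sketch (the master URL and job file are placeholders):
$SPARK_HOME/bin/spark-submit --master k8s://<api-server> --deploy-mode cluster <other options> my_job.py
echo $?   # 1 when the driver log contained "exit code: 1", 0 otherwise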
Then I built and pushed my Spark Docker image.
See the following link for more details:
https://itnext.io/things-to-consider-to-submit-spark-jobs-on-kubernetes-766402c21716

Call Perl Script using Ansible

I have the below .sh code which needs to be converted to Ansible tasks.
#!/bin/sh
echo "Installing Sonar"
SONAR_HOME=/tui/hybris/sonar
if [ ! -d "$SONAR_HOME" ]; then
    mkdir -p $SONAR_HOME
fi
cd $SONAR_HOME
wget https://s3-eu-west-1.amazonaws.com/tuiuk/source/sonarqube/sonarqube-5.4.zip
unzip sonarqube-5.4.zip
echo "Modifying Sonar config file"
cd sonarqube-5.4/conf
perl -p -i -e 's/#sonar.jdbc.username=/sonar.jdbc.username=sonar/g' sonar.properties
perl -p -i -e 's/#sonar.jdbc.password=/sonar.jdbc.password=sonar/g' sonar.properties
perl -p -i -e 's/#sonar.jdbc.url=jdbc:mysql/sonar.jdbc.url=jdbc:mysql/g' sonar.properties
cd $SONAR_HOME
echo "downloading and copying plugins"
wget https://s3-eu-west-1.amazonaws.com/tuiuk/source/sonarqube/sonarqube5.4_plugins.zip
unzip sonarqube5.4_plugins.zip
cp plugins/* sonarqube-5.4/extensions/plugins/
cd sonarqube-5.4/bin/linux-x86-64
echo "Starting Sonar"
./sonar.sh start
Below is my task. I got stuck where I need to execute the perl script. Could any of you help me proceed further?
- hosts: docker_test
  tasks:
    - name: Creates directory
      file: path=/tui/hybris/sonar state=directory mode=0777
      sudo: yes
    - name: Installing Sonar
      get_url:
        url: "https://s3-eu-west-1.amazonaws.com/tuiuk/source/sonarqube/sonarqube-5.4.zip"
        dest: "/tui/hybris/sonar/sonarqube-5.4.zip"
      register: get_solr
    - debug:
        msg: "solr was downloaded"
      when: get_solr|changed
    - name: Unzip SonarQube
      unarchive: src=/tui/hybris/sonar/sonarqube-5.4.zip dest=/tui/hybris/sonar copy=no
I bet you don't need perl here; use lineinfile with the regexp option (if you need to modify a single line in the file) or the replace module (if you need to modify all occurrences).
Just call perl with the command or shell module:
- name: Modifying Sonar config file
  shell: cd sonarqube-5.4/conf && perl -p -i -e ...
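To illustrate the replace-module suggestion above, an ad-hoc invocation might look roughly like this (a sketch; the host group and file path come from the question, the rest is an assumption, and older Ansible versions use dest instead of path):
ansible docker_test -m replace -a \
  "path=/tui/hybris/sonar/sonarqube-5.4/conf/sonar.properties regexp='^#sonar.jdbc.username=' replace='sonar.jdbc.username=sonar'"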

Restart Play Framework Activator using a script on mac & linux

I am trying to develop a script that can restart an activator instance running on a specified port. I normally run my activator project at port 15000 and I am aiming to have it restarted using the script. I can then later call that script from a web page to have activator restarted remotely etc.
So far I have found a really handy utility on Linux called fuser, which can find the process listening on a specified port and kill it. Something like:
fuser -k 15000/tcp
which works fine on Linux but NOT on a Mac.
I guess I would also need to somehow track the activator project location to start it later.
Please let me know your suggestions and comments on how this can be done.
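(For reference, a rough macOS equivalent of the fuser line above, using the port from the question, could be the following; lsof -ti prints just the matching PIDs:)
lsof -ti tcp:15000 | xargs kill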
I'm using a bash file for this. It works on Linux and Mac OS.
It's named loader.sh and placed in your distribution's root.
To stop, it uses the kill command and the PID stored in RUNNING_PID.
#!/bin/bash

# Change IP address and port here
address="127.0.0.1"
port="9000"

# Get directory and add it to PATH
dir="$( cd "$( dirname "$0" )" && pwd )"
export PATH="$dir:$dir/bin:$PATH"

function start() {
    # Check if we started already
    [ -f $dir/RUNNING_PID ] && return
    echo -n "Starting"
    # You can specify a config file with -Dconfig.resource
    # or a secret with -Dplay.crypto.secret
    myApp -Dhttp.port=$port -Dhttp.address=$address > /dev/null &
    echo "...started"
}

function stop() {
    [ -f $dir/RUNNING_PID ] || return
    echo -n "Stopping"
    kill -SIGTERM $(cat $dir/RUNNING_PID)
    while [ -f $dir/RUNNING_PID ]
    do
        sleep 0.5
    done
    echo "...stopped"
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: loader.sh start|stop|restart"
        exit 1
        ;;
esac
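A usage sketch (assuming myApp above is the start script that Play generated in your distribution's bin directory):
./loader.sh start      # starts the app on 127.0.0.1:9000 and writes RUNNING_PID
./loader.sh restart    # stop, then start
./loader.sh stop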

UIAutomation : Failed to authorize rights with status: -60007

So I am running UIAutomation on the command line with:
$ instruments -t /Developer/Platforms/iPhoneOS.platform/Developer/Library/Instruments/PlugIns/AutomationInstrument.bundle/Contents/Resources/Automation.tracetemplate \
    <path-to-your-app>/<appname>.app/ \
    -e UIASCRIPT <path-to-your-js-test-file> \
    -e UIARESULTSPATH <path-to-results-folder>
This works fine: the simulator opens up and the app runs, but then it gets stuck with this error:
Failed to authorize rights (0x2) with status: -60007
I believe it has something to do with the permissions.
How do I go about this?
That's the answer I posted at Instruments via command line - jenkins
And here is even a blog post about Xcode command line authorization prompt error
I will explain it again here:
What I did was the following:
Mark the jenkins user as admin (unfortunately it seems there is no other way at the moment)
Go to /etc/authorization
Search for the key system.privilege.taskport
Change the value of allow-root to true
<key>system.privilege.taskport</key>
<dict>
    <key>allow-root</key>
    <false/>    <!-- change this to <true/> -->
    <key>class</key>
    <string>user</string>
    <key>comment</key>
    <string>Used by task_for_pid(...).
    ...
</dict>
Now I am able to use Jenkins to run my UIAutomation tests via a command-line script.
EDIT
To make Jenkins recognize a successful build, I don't have a perfect solution, only the following workaround:
...
echo "Run instruments simulator"
instruments -t "$ORDER_AUTOMATION_TEST_TEMPLATE_PATH" "$FILE_DEBUG_APP" -e UIASCRIPT "$ORDER_AUTOMATION_TESTSCRIPT_PATH" -e UIARESULTSPATH "$DIRECTORY_INSTRUMENTS_RESULT"
returnCode=0
if test -a "Run 1/Assertion failed.png"; then
    echo "failed"
    returnCode=1
else
    echo "passed"
    returnCode=0
fi
rm -fR "Run 1"
rm -fR "instrumentscli0.trace"
echo "Removing app dir"
echo "$FILE_APPLICATIONS"
rm -fR "$FILE_APPLICATIONS"
echo $returnCode
exit $returnCode
EDIT 2
A better way to check whether the automation test ran successfully:
# cleanup the trace files produced by instruments
rm -rf *.trace
## kill simulator afterwards
killall "iPhone Simulator"
## check if failures occurred
# fail script if any failures have been generated
if [ `grep "<string>Error</string>" "$WORKSPACE/Automation Results/Run 1/Automation Results.plist" | wc -l` -gt 0 ]; then
    echo 'Build Failed'
    exit -1
else
    echo 'Build Passed'
    exit 0
fi
This can help on Mavericks and Yosemite (based on Alexander's answer):
$ security authorizationdb write system.privilege.taskport allow
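To verify the change took effect, the right can be read back (a small sketch, not part of the original answer):
$ security authorizationdb read system.privilege.taskport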