We have a NodeJS Cloud Foundry application with a DevOps Delivery Pipeline enabled. We are attempting to update our deploy script to allow us to deploy app updates without any downtime. We now have a script that mostly works (see below).
However, we realize that during the deployment, our app will start twice. What do we need to change in the script so only one server initialization will occur?
Here is the script:
#!/bin/bash
# Push app
if ! cf app $CF_APP; then
  cf set-env "${CF_APP}" NODE_ENV development
  cf set-env "${CF_APP}" HOST_NAME bluemix
  cf push $CF_APP
else
  OLD_CF_APP=${CF_APP}-OLD-$(date +"%s")
  rollback() {
    set +e
    if cf app $OLD_CF_APP; then
      cf logs $CF_APP --recent
      cf delete $CF_APP -f
      cf rename $OLD_CF_APP $CF_APP
    fi
    exit 1
  }
  set -e
  trap rollback ERR
  cf rename $CF_APP $OLD_CF_APP
  cf push $CF_APP
  cf set-env "${CF_APP}" NODE_ENV development
  cf set-env "${CF_APP}" HOST_NAME bluemix
  cf restage ${CF_APP}
  cf delete $OLD_CF_APP -f
fi
I would suggest taking a broader look and considering what is called "blue-green deployment". Basically, you start a second instance of the app running the new code version and then switch traffic over from the old version to the new one.
There are different approaches you can take to such a high-availability deployment with Cloud Foundry apps on IBM Cloud. There are Cloud Foundry CLI plugins, such as "autopilot" or "blue-green-deploy", whose goal is zero-downtime deployments. You can use them directly or take them as input for your own scripting.
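For example, with the community blue-green-deploy plugin, the rename/push/delete dance in your script collapses into a couple of commands. A minimal sketch (the smoke-test script path is a placeholder; check the plugin's README for its exact options):
# Install the community plugin once per pipeline worker
cf install-plugin blue-green-deploy -r CF-Community
# Push the new version alongside the old one, switch the route over,
# and keep the old app around in case you need to roll back
cf blue-green-deploy $CF_APP --smoke-test ./smoke-test.sh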
I have set up the Confluent Platform on my local machine using Windows Subsystem for Linux. The Confluent Platform is working fine, and I tested its local environment (for development only) using the local command, i.e. confluent local services start (which starts all the services: ZooKeeper, Kafka, Connect, and also Control Center).
Now I want to test it in a production environment (which simply means not using the local command). Basically, I want to run and use the same services as in the local environment, just without the local command.
I log in using confluent login with the email address and password I created on Confluent's website.
Running confluent now shows these commands:
yes#robin:/mnt/c/Users/robin$ confluent
Manage your Confluent Cloud.
Usage:
confluent [command]
Available Commands:
admin Perform administrative tasks for the current organization.
api-key Manage the API keys.
audit-log Manage audit log configuration.
cloud-signup Sign up for Confluent Cloud.
completion Print shell completion code.
connect Manage Kafka Connect.
context Manage CLI configuration contexts.
environment Manage and select Confluent Cloud environments.
help Help about any command
iam Manage RBAC and IAM permissions.
kafka Manage Apache Kafka.
ksql Manage ksqlDB.
local Manage a local Confluent Platform development environment.
login Log in to Confluent Cloud or Confluent Platform.
logout Log out of Confluent Cloud.
price See Confluent Cloud pricing information.
prompt Add Confluent CLI context to your terminal prompt.
schema-registry Manage Schema Registry.
shell Start an interactive shell.
version Show version of the Confluent CLI.
Flags:
--version Show version of the Confluent CLI.
-h, --help Show help for this command.
-v, --verbose count Increase verbosity (-v for warn, -vv for info, -vvv for debug, -vvvv for trace).
Now I am not able to locate Control Center.
Can someone tell me whether this is the correct way to log in to Confluent Platform on-premises, or whether I have to use some other command to start all the services, including Control Center?
Thanks, Robin.
You should be able to open a browser at http://localhost:9021. You shouldn't need to log in or use the CLI after confluent local services start; however, this may not work in WSL2, since a network port needs to be forwarded for your Windows browser to reach that environment.
Instead, follow the local quickstart guide, which uses Docker Compose (install Docker for Windows and enable WSL2 integration in its settings). The network port forward then works without extra configuration.
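A minimal sketch of that route, assuming Confluent's published cp-all-in-one quickstart repository (the directory layout may vary between releases):
# Clone the quickstart and start the full stack, including Control Center
git clone https://github.com/confluentinc/cp-all-in-one.git
cd cp-all-in-one/cp-all-in-one
docker compose up -d
# Control Center should then be reachable at http://localhost:9021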
gcloud beta run deploy used to work but now I'm getting an error:
$ gcloud beta run deploy $PROJECT --image $IMAGE_NAME --platform=managed --region us-central1 --project $PROJECT --add-cloudsql-instances $PROJECT-db
...
DONE
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
abcdefj-higj-lmnopquer-uvw-xyz 2019-06-29T13:59:07+00:00 1M4S gs://$PROJECT_cloudbuild/source/XYZ123.96-aae829d50a2e43a29dce44d1f93bafbc.tgz gcr.io/$PROJECT/$PROJECT (+1 more) SUCCESS
API [sql-component.googleapis.com] not enabled on project
[$PROJECT]. Would you like to enable and retry (this will take a
few minutes)? (y/N)? y
Enabling service [sql-component.googleapis.com] on project [$PROJECT]...
ERROR: (gcloud.beta.run.deploy) INVALID_ARGUMENT: Invalid operation name operations/noop.DONE_OPERATION, refers to an already DONE operation
I've checked the APIs from the console, both Cloud SQL Admin and Cloud SQL APIs are enabled. I've also tried disabling them and run the deploy command again, but to no avail.
More info:
The Cloud SQL instance is part of the same project. Changing the --add-cloudsql-instances parameter to the full connection name ($PROJECT:$REGION:$SQLNAME) has no effect.
Manually enabling the API has no effect: gcloud services enable sql-component.googleapis.com --project XXX
Removing the --add-cloudsql-instances parameter makes the deploy succeed.
This works: gcloud sql connect $PROJECTDB --user=root --quiet
# NOTE: ($PROJECTDB) is the same parameter as --add-cloudsql-instances above
There seems to be a bug in gcloud v253.0.0 when deploying Cloud Run services with Cloud SQL instances (the bug report requires a Google log-in to view).
Once I downgraded to gcloud v251.0.0, the "API [sql-component.googleapis.com] not enabled" error went away and I was able to deploy Cloud Run services with Cloud SQL instances again.
$ gcloud components update --version 251.0.0
UPDATE, July 17, 2019: The issue is fixed in Cloud SDK 254.0.0. If you upgrade to the latest version now, deploying Cloud Run services with Cloud SQL instances should work:
$ gcloud components update
For this problem there were two issues:
Enabling API services. I recommend enabling required services before running the Cloud Run deploy, since enabling can take longer than the deploy command allows. Run this command first: gcloud services enable sql-component.googleapis.com
The Cloud SQL connection name was incorrect. Specifying the correct name helps.
The format of the Cloud SQL connection name is: $PROJECT:$REGION:$GCP_SQL_NAME.
Example: development-123456:us-central1:mysqldb
This command will return information about the Cloud SQL instance including the connection name:
gcloud sql instances describe <instance_name>
Note: Cloud Run has two options for specifying the Cloud SQL instances to attach.
--add-cloudsql-instances - appends the specified connection names to the existing list.
--set-cloudsql-instances - replaces the current list of Cloud SQL connection names.
When deploying a new revision of an existing service, it is not necessary to repeat the --add-cloudsql-instances option, as the value persists. I prefer --set-cloudsql-instances, to clearly specify the attached Cloud SQL instances.
Cloud Run supports multiple Cloud SQL instances; you can attach more than one connection name.
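Putting both fixes together, a sketch of the deploy (all names are placeholders):
# Enable the API up front so the deploy does not stall waiting for it
gcloud services enable sql-component.googleapis.com --project $PROJECT
# Attach the instance by its full connection name, replacing any previous list
gcloud run deploy $SERVICE \
  --image gcr.io/$PROJECT/$IMAGE \
  --platform managed \
  --region us-central1 \
  --set-cloudsql-instances $PROJECT:us-central1:$GCP_SQL_NAME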
I have created a simple Node/Express/MongoDB app which has 3 API endpoints to perform basic CRUD operations.
If I were to deploy this to Heroku as a service and use a Bitbucket pipeline for CI/CD, that would do the job for me. On top of this, I could use Heroku pipelines to get multiple stages of environments, like dev and production.
After doing all of the above I would be done with my pipeline and happy about it.
Now coming back to serverless: I have deployed my API endpoints to AWS as Lambda functions, and that is the only environment (let's say DEV) present at the moment.
Now how can I achieve a pipeline similar to the one mentioned earlier in a serverless architecture?
All the solutions out there (maybe I missed some) do not suggest promoting the actual code that was tried and tested on the dev environment to production; rather, they deploy a new set of code. Is this a limitation?
Option 1
Presuming that you are developing a Node Serverless application, deploying a new set of code with the same git commit ID and package-lock.json/yarn.lock should result in the same environment. This can be achieved by executing multiple deploy commands to different stages e.g.
sls deploy -s dev
sls deploy -s prod
There are various factors that may cause the deployed environments to be different, but the risk of that should be very low. This is the simplest CI/CD solution you can implement.
Option 2
If you'd like to avoid the risk from Option 1 at all costs, you can split the package and deploy phases in your pipeline. Create the packages from the codebase you have checked out before deploying:
sls package -s dev --package build/dev
sls package -s prod --package build/prod
Archive as necessary, then to deploy:
sls deploy -s dev --package build/dev
sls deploy -s prod --package build/prod
Option 3
This is an improved version of Option 2. I have not tried this solution, but it should theoretically be possible. The problem with Option 2 is that you have to execute the package command multiple times, which might not be desirable (YMMV). To avoid packaging more than once, first create the package:
sls package -s dev --package build
Then to deploy:
# Execute a script to modify build/cloudformation-template-update-stack.json to match dev environment
sls deploy -s dev --package build
# Execute a script to modify build/cloudformation-template-update-stack.json to match prod environment
sls deploy -s prod --package build
If you have the following resource in build/cloudformation-template-update-stack.json for example:
"MyBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "myapp-dev-bucket"
}
},
The result of the script you execute before sls deploy should modify the CF resource to:
"MyBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "myapp-prod-bucket"
}
},
This option, of course, implies that you can't have any hardcoded resource names in your app; every resource name must be injected from serverless.yml into your Lambdas.
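A minimal sketch of the rewrite script for the bucket example above, assuming the stage name is the only stage-specific part of each resource name (a targeted jq edit would be safer than a blanket sed for anything non-trivial):
# Rewrite every dev-stage resource name in the packaged template to prod
sed -i 's/myapp-dev-/myapp-prod-/g' build/cloudformation-template-update-stack.json
sls deploy -s prod --package build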
I'm using an IBM devops pipeline based on the Cloud Foundry template. The template gives you Blue-Green deployments.
My stage deploy script looks like this:
#!/bin/bash
cat << EOF > ${WORKSPACE}/manifest.yml
declared-services:
  my_cloudant:
    label: cloudantNoSQLDB
    plan: Lite
  my_messagehub:
    label: messagehub
    plan: standard
  my_autoscaling:
    label: Auto-Scaling
    plan: free
  my_availability_monitoring:
    label: AvailabilityMonitoring
    plan: Lite
applications:
- name: movie-recommend-demo
  host: movie-recommend-demo
  buildpack: https://github.com/cloudfoundry/python-buildpack.git#v1.5.18
  memory: 128M
  instances: 2
  path: web_app
  services:
  - my_cloudant
  - my_messagehub
  - my_autoscaling
  - my_availability_monitoring
  timeout: 180
  env:
    # these are set in the devops stage ENVIRONMENT PROPERTIES
    BI_HIVE_USERNAME: ${BI_HIVE_USERNAME}
    BI_HIVE_PASSWORD: ${BI_HIVE_PASSWORD}
    BI_HIVE_HOSTNAME: ${BI_HIVE_HOSTNAME}
EOF
# Push app
if ! cf app $CF_APP; then
  cf push $CF_APP
else
  OLD_CF_APP=${CF_APP}-OLD-$(date +"%s")
  rollback() {
    set +e
    if cf app $OLD_CF_APP; then
      cf logs $CF_APP --recent
      cf delete $CF_APP -f
      cf rename $OLD_CF_APP $CF_APP
    fi
    exit 1
  }
  set -e
  trap rollback ERR
  cf rename $CF_APP $OLD_CF_APP
  cf push $CF_APP
  cf delete $OLD_CF_APP -f
fi
# TODO:
# - Reconfigure Availability Monitoring on Green deployment
# - Reconfigure Autoscaling on Green deployment (https://console.bluemix.net/docs/cli/plugins/auto-scaling/index.html)
# Export app name and URL for use in later Pipeline jobs
export CF_APP_NAME="$CF_APP"
export APP_URL=http://$(cf app $CF_APP_NAME | grep urls: | awk '{print $2}')
# View logs
#cf logs "${CF_APP}" --recent
Before setting up and running the stage, I had availability monitoring set up on my Cloud Foundry app. Running the stage has caused my availability monitoring configuration to be deleted.
How can I automatically reconfigure the availability monitoring in the new 'green' deployment with the script?
I had a similar question for Auto-Scaling, but there appears to be an API/CLI that I can use to reconfigure that service. However, I ran into a problem using cf oauth-token.
This is a current deficiency in the service that is actively being worked on; a fix should be available later this year.
For now, the way to keep the configuration is to not delete the old app but to reuse two apps. This can become somewhat confusing as to which one has the tests, even if you only bind the service to one app, especially if you use the monitoring tab.
What we do for self-monitoring is create a dummy app in the space and bind the service to it (it doesn't even need to be running). We then use it to monitor the blue/green app(s). Here, too, we don't delete apps but just reuse them.
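A sketch of what reusing two apps could look like in the deploy script above, replacing the rename/delete steps with a route swap (cf map-route and cf unmap-route are standard CF CLI commands; $DOMAIN and $HOSTNAME are placeholders):
# Push the new version under a second, fixed name instead of deleting anything
cf push "${CF_APP}-idle"
# Point production traffic at the new app, then detach the old one
cf map-route "${CF_APP}-idle" $DOMAIN --hostname $HOSTNAME
cf unmap-route "$CF_APP" $DOMAIN --hostname $HOSTNAME
# Both apps survive, so service bindings and monitoring configuration persist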
The Backup feature in the developer console is great. I would, however, like the possibility to automate it. Is there a way to do so from the cf command line app?
Thanks
It's not possible from the cf CLI, but there's an API endpoint for triggering backups.
From the API docs (Custom Extensions, Swisscom Application Cloud Filter for the Cloud Foundry (CF) Cloud Controller (CC) API, which implements Swisscom proprietary extensions):
POST /custom/service_instances/{service-instance-id}/backups
Creates a backup for a given service instance
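Since the filter sits in front of the regular Cloud Controller API, it should be scriptable with the stock CLI. A sketch, assuming the custom routes are served from the same API endpoint the cf CLI targets (my-db is a placeholder instance name):
# Look up the service instance GUID, then POST to the proprietary backup route
SERVICE_GUID=$(cf service my-db --guid)
cf curl "/custom/service_instances/${SERVICE_GUID}/backups" -X POST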
For more info, see Service Backup and Restore on docs.developer.swisscom.com:
Create Backup
To create a backup, navigate to the service instance in the web console and then to the “Backups” tab. There you can click the “Create” button to trigger a manual backup.
Note: Backups have to be triggered manually from the web console.
Be aware that you can only keep a set number of backups per service instance. The actual number depends on the service type and service plan. If you already have the maximum number, you cannot create any new backups before deleting one of the existing ones.
It may take several minutes to back up your service (depending on the size of your service instance).
Restore Backup
You can restore any backup at any time. The current state of your service instance will be overwritten and replaced with the state saved in the backup. You are advised to create a backup of the current state before restoring an old state.
Limitations
You can only perform one backup or restore action per service instance at a time. If an action is still ongoing, you cannot trigger another one. You cannot exceed the maximum number of backups per service instance.
We did this by developing a small Node.js application which runs on the cloud in the same space and automatically backs up our MariaDB and MongoDB every night.
EDIT:
You can download the code from here:
https://github.com/theonlyandone/cf-backup-app
Fresh off the press: the Swisscom Application Cloud cf CLI plugin can now also automate backup and restore.
The official cf CLI plugin for the Swisscom Application Cloud gives
you access to all the additional features of the App Cloud.
cf install-plugin -r CF-Community "Swisscom Application Cloud"
From the 0.1.0 release notes:
Service Instance Backups
Add cf backups command (list all backups of a service instance)
Add cf create-backup command (create a new backup of a service instance)
Add cf restore-backup command (restore an existing backup of a service instance)
Add cf delete-backup command (delete an existing backup of a service instance)
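A sketch of a nightly backup job built on those commands (my-db is a placeholder service instance name; check cf backups -h for the exact argument syntax):
# Create a fresh backup of the service instance, e.g. from a scheduled CI job
cf create-backup my-db
# List existing backups so old ones can be pruned before hitting the plan limit
cf backups my-db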
Despite the answer from Matthias Winzeler saying it's not possible, it is in fact entirely possible to automate MariaDB backups through the command line.
I developed a plugin for the CF CLI:
https://github.com/gsmachado/cf-mariadb-backup-plugin
In the future I could extend the plugin to back up any kind of service that is supported by the Cloud Foundry provider's API (in this case, the Swisscom AppCloud API).