Google Cloud Container Builder from Cloud Source Repository

Is it possible to build a Docker container using Google Cloud Container Builder from source code in a Google Cloud Source Repository?
The docs say the code must be in Cloud Storage, so I assume the answer is no, but this seems crazy. Am I missing something? Is code in Cloud Source Repositories accessible via Cloud Storage?

This feature is now publicly supported; see the API docs at https://cloud.google.com/container-builder/docs/api/reference/rest/v1/projects.builds#RepoSource
Let us know if you have any problems or feature requests or use cases this doesn't cover.
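For reference, a build request that points at a Cloud Source Repository looks roughly like this (a minimal sketch in TypeScript calling the REST endpoint directly; the project, repo, and image names are placeholders, and getting an OAuth2 access token is left to whatever auth you already use, e.g. a service account):

    // Sketch only: submit a build whose source is a Cloud Source Repository.
    const PROJECT_ID = 'my-project';   // placeholder

    async function submitBuild(accessToken: string): Promise<void> {
      const build = {
        source: {
          repoSource: {
            projectId: PROJECT_ID,
            repoName: 'my-repo',    // name of the Cloud Source Repository
            branchName: 'master',   // or tagName / commitSha
          },
        },
        steps: [
          {
            name: 'gcr.io/cloud-builders/docker',
            args: ['build', '-t', `gcr.io/${PROJECT_ID}/my-image`, '.'],
          },
        ],
        images: [`gcr.io/${PROJECT_ID}/my-image`],
      };

      const resp = await fetch(
        `https://cloudbuild.googleapis.com/v1/projects/${PROJECT_ID}/builds`,
        {
          method: 'POST',
          headers: {
            Authorization: `Bearer ${accessToken}`,
            'Content-Type': 'application/json',
          },
          body: JSON.stringify(build),
        },
      );
      console.log(await resp.json());   // the returned Operation describes the queued build
    }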

Unfortunately, the API is fairly low level, so building from GSR (Google Cloud Source Repositories) or other source control systems directly is not currently possible.
However, it is possible to write a service that watches for source code changes in your favorite SCM, copies that source into a GCS bucket (handling your SCM auth as necessary), and then triggers the Container Builder API to build an image.
Google is running an Alpha program for additional tools that are built on top of this API. Those who are interested are encouraged to sign up here.
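As a rough sketch of that workaround (TypeScript; the bucket, project, and image names are placeholders, and the build request is the same REST call as in the answer above, just with a storageSource instead of a repoSource):

    import { Storage } from '@google-cloud/storage';

    // Sketch only: copy an archived source tree into GCS, then point a build at it.
    async function uploadSourceAndBuild(accessToken: string): Promise<void> {
      const bucket = 'my-source-bucket';                 // placeholder bucket
      const objectName = `source-${Date.now()}.tar.gz`;

      // 1. Upload the source archive produced from your SCM checkout.
      await new Storage().bucket(bucket).upload('source.tar.gz', { destination: objectName });

      // 2. Submit a build whose source is the GCS object.
      const build = {
        source: { storageSource: { bucket, object: objectName } },
        steps: [{ name: 'gcr.io/cloud-builders/docker', args: ['build', '-t', 'gcr.io/my-project/my-image', '.'] }],
        images: ['gcr.io/my-project/my-image'],
      };
      await fetch('https://cloudbuild.googleapis.com/v1/projects/my-project/builds', {
        method: 'POST',
        headers: { Authorization: `Bearer ${accessToken}`, 'Content-Type': 'application/json' },
        body: JSON.stringify(build),
      });
    }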


How to deploy code to IBM Cloud using a DevOps approach?

Can somebody shed some light on IBM Cloud deployment tools / platforms?
I am new to it, so I am looking at the docs and watching videos, and I am still confused.
What I want to achieve is the typical scenario: fetch code from a repo, build it, test it, and deploy it to the cloud. I found strategies / platforms for achieving that, and I still can't see the differences or the advantages / disadvantages between them.
So we have:
Toolchain
Cloud Foundry
Code Engine
Continuous Delivery (service)
and maybe something more? :)
I am looking at a Cloud Foundry explainer video, and the presenter says that if you want to not care about the bottom part, like networking, security, and containers, you can choose to deploy using the K8s service. Wtf? So instead of a fully automatic thing, you can now handle some of it in Cloud Foundry yourself. For me it's a total mix of everything together, and I don't know which tool / platform / strategy to use.
Any comment is appreciated.
It all depends on your requirements.
IBM DevOps / Continuous Delivery / Toolchains is a set of services that can build and deploy your application to a given runtime. You can find various tutorials here - https://www.ibm.com/cloud/architecture/courses/toolchain-tutorials. These tutorials show you various things that you can embed in your build pipelines (like code scanning with CRA, image scanning, signing, etc.).
These runtimes can be different depending on your requirements:
CloudFoundry, where you deploy the app using a buildpack, but this is a fading technology, so I wouldn't recommend it
as a Docker image in a K8s/OpenShift cluster - use this if your organization is planning to use or is already utilizing Docker/Kubernetes/OpenShift. You will need to create a K8s/OpenShift cluster first.
as a serverless app, using IBM Code Engine
If you are just starting out and want to deploy a simple, single app to the cloud, I'd consider using IBM Code Engine and not investigate Toolchains for now. Check the basic demo here - How to deploy source code with IBM Cloud Code Engine
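If it helps, the kind of app Code Engine builds and runs from source can be as small as this sketch (Node/TypeScript; Code Engine passes the port to listen on via the PORT environment variable, typically 8080):

    import { createServer } from 'http';

    // Minimal HTTP app; Code Engine (like other serverless platforms) injects PORT.
    const port = Number(process.env.PORT ?? 8080);

    createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from Code Engine\n');
    }).listen(port, () => console.log(`listening on ${port}`));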

The best way to deploy/redeploy PHP code from GitHub to GCP Compute Engine LAMP Stack [Google Click to Deploy]

overflowers!
Can someone please advise me on the best way to continuously deploy PHP code from GitHub to GCP Compute Engine? Specifically to the GCP Marketplace LAMP stack, which is the Google Click to Deploy VM? Here is the link to the marketplace
Your advice is greatly appreciated!
Click to Deploy (C2D) is an excellent way to test-drive solutions, but I'm (admittedly somewhat naively) skeptical that combining C2D with customization is a good approach.
That said, the C2D solutions are published, and you could, with some work, use one as the basis for your own solution.
In other words, I'd recommend not using the C2D solution as-is but customizing the tools that it uses (!) for your needs.
The README explains how the LAMP VM is built (Cloud Build, Packer, Chef).
Without wishing to in any way impugn your approach, please consider alternative ways to deploy PHP to Google Cloud Platform. Running Apache and MySQL on a VM may be entirely appropriate for your needs but you will need to maintain the OS, Apache, MySQL etc.
If your goal is to deploy a PHP (web) app that needs a MySQL-compatible database and you want to be more "cloud native", you could consider using:
App Engine or Cloud Run to host your PHP app (see link)
Cloud SQL for the database (see link)
The above would require more initial work but, if you want more flexibility, resilience and less "chore", I think you'd benefit from the investment.
In addition, opening up the app like this would make it easier to leverage Cloud Monitoring, Logging, Debugger, etc.

How do I deploy .BNA file on IBM cloud blockchain 2.0 resource?

I am trying to set up a rest API that is connected to an IBM blockchain resource. I have developed a model file, logic file, and acl file.
I have it all packed up in a nice tidy .BNA, and now I would like to deploy it to a channel of my IBM Cloud Blockchain 2.0 resource, running on a free Kubernetes cluster.
Everything on the cloud blockchain resource is set up perfectly, and all orgs, peers, orderers, MSPs, and CAs are set up correctly. The channel is set up properly and has nodes and an MSP connected. I have all the admin cred .jsons.
The channel only accepts smart contract files, so I tried repackaging the files (logic.js, permissions.acl, and model.cto) by putting them in a contract folder and using the IBM Blockchain VS Code plugin to package them as a smart contract, but trying to install on the IBM cloud crashes the browser.
I am thinking maybe I have to remote-connect into the IBM Kubernetes cluster that the blockchain resource is sitting on and use the Hyperledger Composer CLI to install the .BNA.
Seems very unintuitive, but that's the one thing I can think to try while I wait for this question to get answered.
I expected to just be able to install the .BNA as a smart contract, like a .cds.
In August 2018, IBM announced that we are no longer investing in Hyperledger Composer, and instead focusing 100% on Hyperledger Fabric. As a result, IBM Blockchain Platform v2.0 will not provide any support or tooling around Hyperledger Composer.
The good news is that we've significantly invested in the programming model (APIs and SDKs) used to write smart contracts and applications in Fabric v1.4, and we've also released some great developer tooling in the form of an extension for Visual Studio Code: https://marketplace.visualstudio.com/items?itemName=IBMBlockchain.ibm-blockchain-platform
The extension offers an extensive set of capabilities for writing smart contracts - with tooling for creating new projects, packaging them, deploying them, testing them, and debugging them - all from within one of the most popular IDEs around.
To get started - just install Visual Studio Code, and then the IBM Blockchain Platform extension (there are a few prereqs, check the README first). After that, you will be presented with a homepage that links you to tutorials and samples to help you get started.
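To give a feel for the Fabric v1.4 programming model the extension targets, a minimal smart contract (TypeScript, using fabric-contract-api; the asset names here are purely illustrative) looks something like this:

    import { Context, Contract } from 'fabric-contract-api';

    // Minimal illustrative contract: stores and retrieves simple string assets.
    export class SampleContract extends Contract {

      public async createAsset(ctx: Context, id: string, value: string): Promise<void> {
        await ctx.stub.putState(id, Buffer.from(value));   // write to world state
      }

      public async readAsset(ctx: Context, id: string): Promise<string> {
        const data = await ctx.stub.getState(id);          // read from world state
        if (!data || data.length === 0) {
          throw new Error(`Asset ${id} does not exist`);
        }
        return data.toString();
      }
    }

The extension can package a project like this into a .cds file and install/instantiate it on your channel, which replaces the old .bna workflow.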
For the first one, I can't really suggest a solution. At best, try installing and using the Composer CLI, on the latest version, to make the .bna file. Composer Playground isn't maintained as well, in my opinion.
For the second part, in the connection.json file and in Docker there will be a bunch of IP addresses that look something like localhost:7040, and so on, for the CA, orderer, org, and peer. You will need to replace these with the IPs given by IBM. The examples on GitHub that demonstrate integration use the Node.js SDK and not Composer; however, you can refer to https://github.com/IBM-Blockchain/vehicle-manufacture to get the idea.
That link is the only one I could find for Hyperledger Composer and the IBM platform.
(comments were getting too long to fit)

What's the anatomy of a Bluemix/Cloud Foundry Node-RED project?

There's lots of documentation and a kludgy console to set up continuous deployment in Cloud Foundry, but I haven't found any documentation on what the artifacts inside a repository need to be.
I don't want to cut-and-paste flows from the Node-RED editor. If that's the only way, then IBM is not ready for prime time. I am also aware that most everything about my flows lives in the Cloudant nodered db.
A Node-RED application is more than the flows, though. What about the _design docs for my dbs?
I need device info and other stuff from the Watson console, Cloudant info, and my flows packaged up into something deployable.
Has anyone scripted this?
What I mean by this is I can clone a Docker project, an npm project and all sorts of projects that implement a build->test->push mechanism. They employ a configuration script of some sort (e.g. package.json) and contain a bunch of source files for the actual application, test scripts, db scripts, whatever is necessary to deploy the application and its environment into a host. I see lots of documentation on the toolchain and its features, but I'm not clear on if it's possible to make use of it for my hosted node red application. Or if I have to write the scripting mechanisms to offload flow info from the nodered db and query all my other dbs for their respective _design docs and all the other configuration information required to set up an IoT node red application.
I forgot to mention: the copy/paste method loses information; you get no tab-level metadata. The only way to get all the flow stuff is to pull it from the nodered flow record.
Node-RED will release a new version in a couple of days that will introduce projects, so you'll be able to use GitHub and all the usual tools to handle your app: https://twitter.com/NodeRED/status/956934949784956931 and https://nodered.org/docs/user-guide/projects/
While it doesn't address your short-term needs, I think it's the best long-term solution. Hopefully that helps.
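Once you are on a release that includes projects, they are switched on in your settings.js, roughly like this excerpt (plain Node.js config, since settings.js is JavaScript; check the projects docs above for the exact current shape):

    // settings.js (excerpt) - enable the projects feature so flows live in a git repo
    module.exports = {
        editorTheme: {
            projects: {
                enabled: true
            }
        }
    };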

Azure deployment versions

I will try to keep it simple. I am using the Windows Azure cloud to host our web services and databases, and these web services are accessible via the URL "https://server.mydomain.com".
Now we have made a few major changes to our model, and hence to the web services as a whole. This breaks the API interface for older users. We want to deploy the latest version at the URL "https://server.mydomain.com/v2" so that old users can still access the older version.
I searched around SO and other resources, but I couldn't find a definite answer on how to deploy the new version without messing up the old one.
Anything pointing in the right direction will be helpful.
In one of the projects I was working on, we built in a versioning scheme on top of our Web API. We used this tutorial to get started. I would recommend starting there.
Sorry for the generic answer, if you post some more specifics I will make some updates.
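The general idea is path-based versioning: route /v2 requests to the new implementation while leaving the existing endpoints untouched. A generic sketch of the routing (Node/TypeScript here purely for illustration, not the ASP.NET Web API specifics from the tutorial):

    import { createServer } from 'http';

    // Illustration only: serve the new API under /v2 and keep the old behaviour at the root.
    createServer((req, res) => {
      const url = req.url ?? '/';
      if (url.startsWith('/v2/')) {
        res.end('response from the v2 API');   // new, breaking behaviour
      } else {
        res.end('response from the v1 API');   // legacy behaviour stays untouched
      }
    }).listen(8080);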
I'd suggest deploying a separate cloud service and using "v2.server.mydomain.com".