Running docker-compose commands inside IBM Bluemix container - ibm-cloud

I am trying to execute the "docker-compose scale" command from within one of my IBM Bluemix containers. I wonder if the IBM Containers API can be used for that purpose (http://ccsapi-doc.mybluemix.net/).

This should be possible, but there are some caveats. The main one: docker/docker-compose running inside a container generally expects access to the Docker socket, which it will not have in this case (that would require privileged access to the host). To get around that, you'll need to install docker, docker-compose, and the bx/cf CLI with the IBM Containers (ic) plugin inside the container, run cf login followed by cf ic login (or the bx equivalents), and export the environment variables those commands display.
Once that's done, docker and docker-compose will be able to reach the IBM Containers API server and make their calls that way.
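A minimal sketch of that flow, run inside the container (the export values below are illustrative of what cf ic login typically prints; copy the ones from your own output):

# Authenticate to Bluemix, then set up IBM Containers access.
cf login -a https://api.ng.bluemix.net
cf ic login

# 'cf ic login' prints export lines similar to these (illustrative values):
export DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
export DOCKER_CERT_PATH=$HOME/.ice/certs/containers-api.ng.bluemix.net/<space-id>
export DOCKER_TLS_VERIFY=1

# docker/docker-compose now talk to the remote API server:
docker-compose scale web=3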

Related

starting Mock Server automatically when running restbird docker container

I want to use the "Mock Server" feature provided with https://restbird.org/.
When starting restbird via the provided docker image, it listens on localhost:8080 by default.
Unfortunately the configured "Mock Server" instances still need to be started via the Web Frontend as it is described in the documentation here:
https://restbird.org/docs/mock-run.html#run-mock-server
Is there a way to automatically start the "Mock Server" instances when running the docker image, without logging in to the backend (via admin/admin) and clicking the "start" button?
The reason is that I want to start the mock server inside a GitLab pipeline, where I have no way to interact with the container after it has been started.
I can not find anything like this in the documentation.
Thanks a lot for any clues - or, if it is not possible, I can make a feature request.
I have found the solution myself. One can start a specific Mock Server as described in:
https://restbird.org/docs/API-MockServer.html#start-specific-mockserver-case
This can be scripted into my pipeline after executing the docker run command.
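For example, a pipeline step could call that API with curl right after the container starts. A sketch assuming the default admin/admin credentials are still in place; the endpoint path is a placeholder, so take the real one from the linked docs page:

# Start restbird, give it a moment to come up, then start a specific
# mock server via the HTTP API (endpoint path is a placeholder).
docker run -d -p 8080:8080 --name restbird <restbird-image>
sleep 5
curl -u admin:admin -X POST "http://localhost:8080/<start-mock-server-endpoint>"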

How to connect to a Dataproc Jupyter notebook in VPC-SC

Without VPC-SC, I can connect to Jupyter Lab on Dataproc. However, when VPC-SC is turned on, the web component is disabled. I am aware of the VPC-SC limitation; what is the workaround?
VPC-SC is designed to prevent data transfer across the security perimeter, and that is essentially what any workaround would amount to.
VPC-SC is not currently supported but we are working on providing first-class support.
The workaround is to use an SSH tunnel to connect to the UI.
Since you mentioned accessing the UI without VPC-SC, I assume that you installed the Jupyter component and checked the "enable component gateway" button when you created your cluster.
If that is the case, then the port you want to connect to on the master node is 8443 and the URL path you want to use is "/jupyter/lab/".
So, for example, if you use the "gcloud command" option from the Dataproc documentation on connecting to cluster web interfaces, the URL you would use after setting up the tunnel would be https://<your-cluster-name>-m:8443/jupyter/lab/
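A sketch of that tunnel, assuming a cluster named my-cluster in zone us-central1-a (1080 is an arbitrary local SOCKS port):

# Open a SOCKS proxy to the cluster's master node over SSH.
gcloud compute ssh my-cluster-m --zone=us-central1-a -- -D 1080 -N

# In a second terminal, start a browser that routes through the proxy
# (Chrome shown; adjust the binary name for your OS).
google-chrome --proxy-server="socks5://localhost:1080" \
  --user-data-dir=/tmp/my-cluster-m https://my-cluster-m:8443/jupyter/lab/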

api connect local cloud 'apic edit' error: it appears that Docker for Windows has not been installed

I have installed the apic editor on Windows 10. I often get a 'Building v5 Gateway' error pop-up when starting the local server on the Assemble tab. The message says: "Error: It appears that Docker for Windows has not been installed..."
That's true - I'm not using it and don't want to use it! I did find a suggested fix (which I can't find now), which said: "set NO_PROXY=127.0.0.1".
This appeared to work sometimes, but now it doesn't. It worked when I set that variable in one command and followed it with 'apic edit'. I have since realised that you can chain 'set' commands using && before 'apic edit'. I hoped that chaining NO_PROXY and 'apic edit' would do the trick... but it didn't.
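For reference, the chained form described above looks like this in a Windows command prompt:

set NO_PROXY=127.0.0.1 && apic edit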
What kind of works is to issue 'apic start' in a separate window. I end up with a running 'node.exe' window and a failed 'Node.js' window, because it can't find an 'env.yaml' file (I've tracked down that this happens because I started it outside the Designer). This means I can at least test the API call.
I expect to be helping a customer to get started with APIC and this behaviour is not going to impress them. How can I get 'normal' service to be resumed?
Regards, John
Try installing Docker for Windows (see Docker's installation documentation) and restart your computer.
Solution
Make sure any gateway instances are stopped
apic stop
Start the API Designer
apic edit
Within the API Designer, select your API and go to Assemble
Make sure the policy palette panel on the left isn't collapsed. If it is, click the right-arrow button
At the top of the policy palette panel, click the filter policies button
Make sure you select Micro Gateway policies
Click save
Now click the play button in the lower left to start the micro gateway
Alternatively, edit the Swagger yaml file for the API and make sure the micro gateway is configured:
x-ibm-configuration:
  gateway: micro-gateway
Note that if you've added any DataPower Gateway policies to your API, they will be disabled when running the micro gateway.
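If you prefer to check or switch the setting from a shell, a small sketch (the file name is a placeholder, and 'datapower-gateway' is assumed here to be the value used for the DataPower option):

# Show the current gateway setting in your API definition:
grep -A1 'x-ibm-configuration' my-api.yaml
# Switch from the DataPower gateway to the micro gateway:
sed -i 's/gateway: datapower-gateway/gateway: micro-gateway/' my-api.yaml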
Details
API Connect requires a gateway to work. There are two different gateways:
The micro gateway, which is open-source but is much more limited. When running this in conjunction with the API Designer, the micro gateway will run as a Node.js app directly on your local machine.
The DataPower Gateway, which is a commercial product but can be used for free for development purposes. When running this with the API Designer, it will run as a Docker container based on an IBM-provided DataPower image.
Regarding "What kind of works is to issue 'apic start' in a separate window":
By running apic start, you've manually started the micro gateway:
$ apic start
Service apic-gw started on port 4001.
$ apic services
Service apic-gw running on port 4001.
$ ps -eo command | grep gateway
/home/user/.nvm/versions/node/v6.14.4/bin/node /home/user/.nvm/versions/node/v6.14.4/lib/node_modules/apiconnect/node_modules/microgateway/datastore/server/server.js
The better way to start the gateway is from within the API Designer by clicking the start button in the lower left. This will start the appropriate gateway for your API. If you see the "Building v5 Gateway" message, you've started the DataPower Gateway.

How to use Postgres or Mongo databases in PCF Dev (Pivotal Cloud Foundry dev)?

In PCF Dev (the replacement for Micro Cloud Foundry) I saw 3 services in the marketplace: MySQL, Redis and RabbitMQ, but I need to use Mongo and Postgres for my stuff. Is there any easy way to add them to this deployment?
PCF Dev does not currently include support for MongoDB or Postgres service instances. It is also not currently possible to install tiles or BOSH releases.
All of these things may be supported eventually, but for now, you can run MongoDB or Postgres on your host system and create a user-provided service instance using the cf CLI.
Here's an example for Postgres: https://docs.tibco.com/pub/bwcf/1.0.0/doc/html/GUID-D7408016-8C7B-4637-BCC5-EDD9D5C52267.html
Note that you must use host.pcfdev.io instead of localhost to refer to the host system (instead of the PCF Dev VM). In the example above, your URL might look like:
url> postgresql://host.pcfdev.io:5432/postgres
(Also note that host.pcfdev.io may actually be host2.pcfdev.io if your system domain is local2.pcfdev.io instead of local.pcfdev.io)
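As a concrete sketch (service, app, and credential values are placeholders):

# Create a user-provided service pointing at Postgres running on the host:
cf create-user-provided-service my-postgres -p '{"uri":"postgresql://user:pass@host.pcfdev.io:5432/postgres"}'
# Bind it to your app and restage so the credentials show up in VCAP_SERVICES:
cf bind-service my-app my-postgres
cf restage my-app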
~Stephen Levine, PCF Dev Product Manager

Docker: multiple linked containers for each customer

I'm developing a platform that monitors emails, saves the results in a Mongo database (through Parse Server) and displays the results on a web app (using AngularJS).
Basically, for each customer I want an SMTP server, a Parse Server, a MongoDB and a web platform.
I thought of using Docker for more scalability, and the idea is to automatically create the containers when the user signs up on my website. But I don't really understand how to link these containers together: web1|smtp1 connected to parse1 connected to mongo1, and web2|smtp2 connected to parse2 connected to mongo2.
In the end, I want the customers to access the web app through web1.website.com, so I think I should also use HAProxy.
I'm just wondering if this is really the best way to do it, as I'm going crazy with the automation process, and whether you have any tips for doing that.
With Docker (specifically Docker Compose), linking containers together is very easy. In fact, it happens out of the box: if you compose all your containers at the same time, a Docker network is created for your "composed app" and all containers can see each other.
See the Docker Compose networking documentation (https://docs.docker.com/compose/networking/).
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
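As an illustration, each customer's stack could live in its own Compose project, which gives every customer an isolated network. Everything below is a placeholder sketch (image names, ports, and keys are assumptions), not a definitive setup:

# A hypothetical per-customer stack; "customer1" and the images are placeholders.
cat > customer1/docker-compose.yml <<'EOF'
version: "2"
services:
  web:
    image: my-angular-app          # placeholder web frontend image
    ports:
      - "8081:80"
  smtp:
    image: my-smtp-server          # placeholder SMTP image
  parse:
    image: parseplatform/parse-server
    command: --appId customer1 --masterKey secret --databaseURI mongodb://mongo:27017/parse
    depends_on:
      - mongo
  mongo:
    image: mongo
EOF

# The -p flag sets the Compose project name, so each customer gets its own network:
docker-compose -p customer1 -f customer1/docker-compose.yml up -d

A reverse proxy such as HAProxy (or nginx) in front can then map web1.website.com to customer1's published web port.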