Integrate RHEV 6.5 with OpenStack Juno - virtualization

Can I use the RHEV hypervisor to manage and deploy VMs in an OpenStack environment? I have a 4-node setup with 2 compute nodes, one of which is connected to vCenter. Can I configure the other node to connect to RHEV?

As far as I know, RHEV integration with OpenStack is limited to networking, so no VM deployment is possible at the moment. You can use oVirt only for Neutron network management: you can import, create, and delete networks on Neutron.
For more details see:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.4/html/Administration_Guide/chap-External_Providers.html
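If you just want to see what that Neutron-side network management looks like outside of oVirt, here is a minimal sketch using the openstacksdk Python library. The cloud name "mycloud" and the network name are placeholders, not anything from your setup:

```python
# Minimal sketch: managing Neutron networks directly with openstacksdk.
# Assumes a clouds.yaml entry named "mycloud" (placeholder) with valid credentials.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a network -- the same kind of object oVirt can consume via the
# external provider integration mentioned above.
net = conn.network.create_network(name="ovirt-demo-net")
print("created:", net.id, net.name)

# List existing Neutron networks.
for n in conn.network.networks():
    print(n.id, n.name)

# Delete the network again.
conn.network.delete_network(net)
```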

Related

How to add a Windows node while creating a cluster using Kubernetes on Google Cloud Platform?

I have tried creating a Kubernetes cluster, but all the nodes run a Linux-based OS (Container-Optimized OS (cos), the default, or Ubuntu). I have a Windows-based image stored on Docker Hub and I need to deploy this app to the Kubernetes cluster. I am using https://console.cloud.google.com/kubernetes/ to create the cluster.
While creating nodes, the settings offer only two options: Container-Optimized OS (cos) (default) and Ubuntu.
Windows nodes are not supported by Google Kubernetes Engine. There is a feature request that you can track: Feature request: Support for Windows Server Containers in GKE.
You can launch your own Google Compute Engine VM and run Windows containers there. This article provides more information.
I don't think you can run Windows nodes in GKE, even though Kubernetes itself supports Windows nodes (https://kubernetes.io/docs/getting-started-guides/windows/).
In my opinion, the other options you have are:
Run an on-prem Kubernetes cluster with your Windows licenses (the control plane would still run on Linux; only the nodes would be Windows-based)
Use GCE instead of GKE to run your containers: https://cloud.google.com/compute/docs/containers/ and https://cloud.google.com/blog/products/gcp/how-to-run-windows-containers-on-compute-engine
Hope that helps!
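As a quick sanity check on what your current cluster actually runs, here is a small sketch with the official Kubernetes Python client that only reads node metadata. It assumes you have a working kubeconfig for the cluster (e.g. from gcloud container clusters get-credentials):

```python
# Sketch: list the operating system each node reports, via the Kubernetes Python client.
# Assumes a valid kubeconfig in the default location (~/.kube/config).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    info = node.status.node_info
    os_label = node.metadata.labels.get("kubernetes.io/os", "unknown")
    print(f"{node.metadata.name}: {info.os_image} (os={os_label})")
```

On GKE this will only ever show Linux images, which is exactly the limitation described above.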

Hyperledger Fabric version 1.2 multiple hosts deployment

I've seen a few tutorials online that describe the steps for deploying a Hyperledger Fabric network across multiple hosts. All of the tutorials suggest using Docker Swarm for the deployment.
I wanted to know whether Docker Swarm is the standard for Hyperledger Fabric multi-host deployment, and also whether there is a deployment standard specifically for Hyperledger Fabric version 1.2.
Hyperledger Cello is a blockchain provisioning and operation system that helps manage blockchain networks in an efficient way.
You can have a look at it here: https://github.com/hyperledger/cello
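If you do go the Docker Swarm route that most tutorials use, the key building block is usually an attachable overlay network that the Fabric containers on different hosts join. A minimal sketch with the Docker SDK for Python, to be run on a Swarm manager node ("fabric-net" is just an example name):

```python
# Sketch: create the attachable overlay network that multi-host Fabric setups on
# Docker Swarm typically rely on. Run this on a Swarm manager node.
import docker

client = docker.from_env()

net = client.networks.create(
    "fabric-net",          # example name; tutorials often use their own
    driver="overlay",
    attachable=True,       # lets standalone containers (peers, orderer, CAs) join it
)
print("overlay network created:", net.name, net.id)
```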

Openstack Heat and Kubernetes Deployment Integration

I want to create an OpenStack cluster with Heat and deploy Kubernetes on it. How can I integrate Heat with Kubernetes? Any solutions or suggestions?
I think the Magnum service is what you are looking for, since it allows you to create and manage not only Kubernetes clusters but also other container orchestration engines such as Docker Swarm or Mesos, and it is fully integrated into OpenStack.
Indeed, at its core Magnum uses Heat to deploy and configure the nodes of the cluster, so if you don't want to reinvent the wheel, give it a go: (Install guide).
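To give a rough idea of what that looks like, here is a sketch using openstacksdk's container-infrastructure (Magnum) proxy. Every cloud name, image, keypair, and flavor below is a placeholder you would replace with values from your own deployment:

```python
# Rough sketch: ask Magnum to build a Kubernetes cluster (Magnum drives Heat underneath).
# All names, images and flavors are placeholders for your own environment.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

template = conn.container_infrastructure_management.create_cluster_template(
    name="k8s-template",
    image_id="fedora-coreos",        # a Magnum-supported image in your Glance
    keypair_id="mykey",
    coe="kubernetes",
    external_network_id="public",
    flavor_id="m1.medium",
    master_flavor_id="m1.medium",
)

cluster = conn.container_infrastructure_management.create_cluster(
    name="k8s-demo",
    cluster_template_id=template.id,
    master_count=1,
    node_count=2,
)
print("cluster create requested:", cluster.id)
```

Behind the scenes Magnum turns this into a Heat stack, so you can follow the build in the Heat dashboard as well.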

Set up of Hyperledger fabric on 2 different PCs

I need to run Hyperledger Fabric instances on 4 different machines: PC-1 should contain the CA and peers of ORG-1 in containers, PC-2 should contain the CA and peers of ORG-2, PC-3 should contain the orderer (solo), and PC-4 should run the Node API. Is my approach missing something? If not, how can I achieve this?
I would recommend that you look at the Ansible driver in the Hyperledger Cello project to manage deployment across multiple hosts/VMs.
In short, you need to establish network visibility across the set of host/VM nodes so that each peer knows about the orderer to which it will connect and so that gossip can operate. The Cello project does this for you with a set of driver options; the Ansible driver seems to have the most promise.
The Ansible driver can provision to a variety of cloud platforms including AWS, Azure, OpenStack and bare metal.
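Whichever driver you use, it is worth verifying up front that every box can actually reach the others on the Fabric ports. A small generic sketch (the hostnames are placeholders and the ports are just the usual Fabric defaults for the topology you describe):

```python
# Sketch: check that each machine can reach the others on the usual Fabric ports.
# Hostnames are placeholders; ports are the common defaults (orderer 7050, peer 7051, CA 7054).
import socket

endpoints = {
    "orderer (PC-3)": ("pc3.example.local", 7050),
    "org1 peer (PC-1)": ("pc1.example.local", 7051),
    "org2 peer (PC-2)": ("pc2.example.local", 7051),
    "org1 CA (PC-1)": ("pc1.example.local", 7054),
    "org2 CA (PC-2)": ("pc2.example.local", 7054),
}

for name, (host, port) in endpoints.items():
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"OK      {name} -> {host}:{port}")
    except OSError as err:
        print(f"FAILED  {name} -> {host}:{port} ({err})")
```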

Can I set up a Kubernetes cluster using kubeadm on Ubuntu machines inside an office LAN?

I was looking at this URL.
It says-"If you already have a way to configure hosting resources, use kubeadm to easily bring up a cluster with a single command per machine."
What do you mean by "If you already have a way to configure hosting resources"?
If I have a few Ubuntu machines within my office LAN, can I set up a Kubernetes cluster on them using kubeadm?
It just means that you already have a way of installing an OS on these machines, booting them, assigning IPs on your LAN, and so on. If you can SSH into your nodes-to-be, you are ready!
Follow the guide carefully and you will have a demo cluster in no time.
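If you want to sanity-check that SSH precondition before you start running kubeadm, a small sketch along these lines (using the paramiko library; the IPs and username are placeholders for your own LAN) will confirm each future node is reachable:

```python
# Sketch: confirm you can SSH into every machine that will join the kubeadm cluster.
# IPs and the username are placeholders; assumes key-based authentication is set up.
import paramiko

nodes = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]

for host in nodes:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        ssh.connect(host, username="ubuntu", timeout=5)
        stdin, stdout, stderr = ssh.exec_command("uname -r && hostname")
        print(f"{host}: reachable ->", stdout.read().decode().strip())
    except Exception as err:
        print(f"{host}: NOT reachable ({err})")
    finally:
        ssh.close()
```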