Connect NS3 node to a local blockchain - simulation

Imagine that you have a local blockchain network where each node is a container created in the same OS. Is there any way to connect each node with another node created in an NS3 simulation?

It could be possible through NS3's emulation framework using Linux containers (see here and here).
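One possible wiring, sketched below under the assumption that the blockchain containers are ordinary Linux containers on the same host: bridge a container's interface and a tap device together, and let an ns-3 TapBridge (in UseBridge mode, with real-time simulation enabled) pick up the tap. All device names here are placeholders, and the stock ns-3 tap-bridge examples use their own names, so adapt accordingly.

```bash
# Sketch only: bridge one blockchain container into an ns-3 simulation via a
# tap device and ns-3's TapBridge. br-chain / tap-chain / veth-node0 are
# placeholder names, not anything ns-3 or your container runtime creates for you.

# 1. Create a Linux bridge and a tap device for ns-3 to attach to.
sudo ip link add name br-chain type bridge
sudo ip tuntap add dev tap-chain mode tap
sudo ip link set tap-chain promisc on
sudo ip link set tap-chain master br-chain
sudo ip link set br-chain up
sudo ip link set tap-chain up

# 2. Plug the container's host-side veth interface into the same bridge.
sudo ip link set veth-node0 master br-chain

# 3. Run an ns-3 scenario built for real-time emulation whose TapBridge is in
#    "UseBridge" mode and points at tap-chain; ns-3 ships examples such as
#    tap-csma-virtual-machine that you can adapt.
./waf --run tap-csma-virtual-machine
```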

Related

Use laptop as a worker node with separate master node

My problem is the same: I have an extra laptop and want to use it as a Kubernetes worker node, instead of having the master node and worker node on the same machine (like minikube).
The machine with the master node and the laptop with the worker node are on the same LAN,
but I have no idea which technology I have to use (OpenShift or something else).
Thanks to all.
If you opt for creating a highly available cluster on bare metal, just follow the official kubernetes.io tutorial Creating Highly Available Clusters with kubeadm and take the option with stacked control plane nodes.
The easiest tool for this purpose is kubeadm: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/. You basically run kubeadm init on the master and kubeadm join on every worker.
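A minimal sketch of that flow, assuming kubeadm is already installed on both machines (the token and hash are printed by kubeadm init, not invented here; the pod CIDR is illustrative):

```bash
# On the machine that will be the master (control plane):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # CIDR is illustrative

# Make kubectl usable for your user on the master:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On the laptop (worker), run the join command that kubeadm init printed:
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Back on the master, check that the laptop registered:
kubectl get nodes
```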

Production ready Kubernetes cluster on Linux VM

We are running all our applications in Linux VMs and tried a Kubernetes cluster on a local Mac using minikube, and it looks promising.
We are interested in setting up Kubernetes on Linux VMs, but:
Is it possible to set up a production-ready cluster on Linux VMs?
As shown in kubernetes/kubeadm issue 465, setting up a cluster using VMs can be a challenge.
Using Calico will help, since it provides secure network connectivity for containers and virtual machine workloads.
Use Calico 2.6.
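As a rough sketch, installing Calico after kubeadm init is a single kubectl apply. The manifest URL below is only illustrative for the 2.6 era; Calico moves it between releases, so check the Calico docs for the path matching your version.

```bash
# Install Calico as the pod network after `kubeadm init`.
# The URL is illustrative; verify it against the Calico docs for your version.
kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

# Watch the Calico pods come up:
kubectl get pods -n kube-system -w
```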

Can I set up a Kubernetes cluster using kubeadm on Ubuntu machines inside an office LAN

I was looking at this URL.
It says: "If you already have a way to configure hosting resources, use kubeadm to easily bring up a cluster with a single command per machine."
What is meant by "If you already have a way to configure hosting resources"?
If I have a few Ubuntu machines within my office LAN, can I set up a Kubernetes cluster on them using kubeadm?
It just means that you already have a way of installing an OS on these machines, booting them, assigning IPs on your LAN, and so on. If you can SSH into your nodes-to-be, you are ready!
Follow the guide carefully and you will have a demo cluster in no time.
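In practice, "configuring hosting resources" boils down to preparing each Ubuntu machine before kubeadm runs. A rough sketch of those steps (package sources and names follow the official install docs of that era; adjust for your Ubuntu release):

```bash
# On every Ubuntu machine in the LAN, before kubeadm init/join:

# A container runtime:
sudo apt-get update && sudo apt-get install -y docker.io

# kubelet, kubeadm and kubectl from the Kubernetes apt repository:
sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
    sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl

# kubeadm refuses to run with swap enabled:
sudo swapoff -a
```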

Kubernetes deployment using shared-disk FC HBA options

I have been looking at the available Kubernetes storage add-ons and have been unable to put together something that would work with our setup. The current situation is several nodes, each with an FC HBA controller connected to a single LUN. I realize that some sort of cluster FS will need to be implemented, but once that is in place I don't see how I would then connect this to Kubernetes.
We've discussed taking what we have and making an iSCSI or NFS host but in addition to requiring another dedicated machine, we lose all the advantages of having the storage directly available on each node. Is there any way to make use of our current infrastructure?
Details:
4x Kubernetes nodes (1 master) deployed via kubeadm on Ubuntu 16.04, using flannel as the network add-on; each system has the SAN LUN available as a block device (/dev/sdb)
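For what it's worth, Kubernetes does have a built-in fc volume plugin that can attach a SAN LUN to a pod on whichever node it lands. The sketch below (WWN, LUN number and size are placeholders) only illustrates that attachment path; it does not make concurrent access from several nodes safe, which still needs the cluster filesystem mentioned above.

```bash
# Illustrative only: expose an FC LUN to Kubernetes via the built-in "fc"
# volume plugin. The WWN, LUN and capacity are placeholders; this does not
# solve shared concurrent access, which still requires a cluster filesystem.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fc-san-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  fc:
    targetWWNs: ["50060e801049cfd1"]   # placeholder target WWN
    lun: 0
    fsType: ext4
    readOnly: false
EOF
```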

How to set up an AWS node and a Vagrant node when the master node is local

Kubernetes 1.2 supports multi-node clusters across multiple service providers. Right now the master node is running on my laptop, and I want to add two worker nodes, one in Amazon and one in Vagrant. How can I achieve this?
Kubernetes 1.2 supports multi-node clusters across multiple service providers
Where did you see this? It isn't actually true. In 1.2 we added support for nodes across multiple availability zones within the same region on the same service provider (e.g. us-central1-a and us-central1-b in the us-central1 region in GCP). But there is no support for running nodes across regions in the same service provider, much less spanning a cluster across service providers.
now the master node is running on my laptop, and I want to add two worker nodes, one in Amazon and one in Vagrant
The worker nodes must be able to connect directly to the master node. I wouldn't suggest exposing your laptop to the internet directly so that it can be reached from an Amazon data center, but would instead advise you to run the master node in the cloud.
Also note that if you are running nodes in the same cluster across multiple environments (AWS, GCP, Vagrant, bare metal, etc) then you are going to have a difficult time getting networking configured properly so that all pods can reach each other.
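For a newer, kubeadm-based cluster (kubeadm did not exist in 1.2), the suggested layout looks roughly like this; the public IP, token and hash are placeholders printed by kubeadm itself:

```bash
# On a cloud VM chosen as the master, advertise a reachable address:
sudo kubeadm init --apiserver-advertise-address=<public-ip> \
    --apiserver-cert-extra-sans=<public-ip>

# On each worker (the AWS instance and the Vagrant box), run the join command
# that kubeadm init printed:
sudo kubeadm join <public-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```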