What's the difference between "internal" and "external" for ethtool's transceiver type? - Ubuntu 16.04

There are two Ubuntu 16.04 machines. On one of them, ethtool reports the transceiver type as "internal"; the other one shows "external". Both machines ping fine.
The ethtool man page explains the option as follows:
"nternal|external
Selects transceiver type. Currently only internal
and external can be specified, in the future fur‐
ther types might be added."
But I can't understand it. So, what is the difference between "internal" and "external" for ethtool's transceiver type?
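For reference, this is the option the man page is describing; a minimal illustration (the interface name is just an example, and the driver must actually support the setting):

    # Show current settings; drivers that report it include a "Transceiver:" line
    sudo ethtool eth0

    # Explicitly select the transceiver type (only works if the driver supports it)
    sudo ethtool -s eth0 xcvr internal
    sudo ethtool -s eth0 xcvr external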

Related

Ethstat / Interface Grafana CollectD not showing correct value (MB/s)

I use Grafana with CollectD (and Graphite) to monitor my network usage on my server.
I use the 'Interface' Plugin of CollectD and display the graphs like this:
alias(scale(nonNegativeDerivative(collectd.graph_host.interface-eth0.if_octets.rx), 0.00000095367431640625), 'download')
When I initiate a download with a speed limit, the download runs for approximately 10 minutes, but the graph shows only a brief peak (the green line in my graph is the download) rather than a sustained rate.
Do I have to use some other metric? I also tried the 'ethstat' plugin, but it has so many options, none of which I understand!
Is there any beginner documentation? I only found the CollectD docs, which I read, but they don't explain what the ethstat metrics actually mean.
No, there isn't any beginner documentation about the meaning of the ethstat metrics in collectd. This is because the ethstat plugin reports statistics collected by ethtool on your system, and the ethtool stats are vendor-specific.
To point you in the right direction, run ethtool -S eth0
That should show you names and numbers like what collectd is reporting.
Now run ethtool -i eth0 and find your driver info.
Then google your driver name and find out which statistics your card reports and what they mean. It may involve reading Linux driver source code, but don't be too scared of that: what you want is probably in the comments, not the code.
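For example, the workflow described above looks like this (interface name and driver are illustrative):

    # 1. Dump the vendor-specific NIC counters that collectd's ethstat plugin reads
    ethtool -S eth0
    # 2. Find out which driver produces those counters
    ethtool -i eth0
    #    prints e.g. "driver: e1000e" -- yours will differ
    # 3. Search that driver's source in the kernel tree for the counter names;
    #    their meaning is usually documented near the stats table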

Publishing metadata to Service Fabric

So, I have this idea I'm working on, where services on some nodes need to discover other services dynamically at runtime, based on metadata that they might publish. And I'm trying to figure out the best way to go about this.
Some of this metadata will be discovered from the local machine at runtime, but it then has to be published to the Fabric so that other services can make decisions on it.
I see the Extension stuff in the ServiceManifests. This is a good start. But it doesn't seem like you can alter or add extensions at runtime. That would be nice!
Imagine my use case. I have a lot of machines on a Fabric, with a lot of services deployed to them. What I'm advertising is the audio codecs that a given machine might support. Some nodes have DirectShow. So, they would publish the local codecs available. Some machines are running 32 bit services, and publish the 32 bit DirectShow codecs they have (this is actually what I need, since I have some proprietary ACM codecs that only run in 32 bit). Some machines are Linux machines, and want to make available their GStreamer codecs.
Each of these needs to publish the associated metadata about what they can do, so that other services can string together from that metadata a graph about how to process a given media file.
And then each will nicely report their health and load information, so the fabric can determine how to scale.
Each of these services would support the same IService interface, but each would only be used by clients that decided to use them based on the published metadata.
In Service Fabric the way to think about this kind of problem is from a service point of view, rather than a machine point of view. In other words, what does each service in the cluster support, rather than what does each machine support. This way you can use a lot of Service Fabric's built-in service discovery and querying stuff, because the abstraction the platform provides is really about services more than it is about machines.
One way you can do this is with placement constraints and service instances representing each codec that the cluster supports as a whole. What that means is that you'll have an instance of a service representing a codec that only runs on machines that support that codec. Here's a simplified example:
Let's say I have a Service Type called "AudioProcessor" which does some audio processing using whatever codec is available.
And let's say I have 5 nodes in the cluster, where each node supports one of codecs A, B, C, D, and E. I will mark each node with a node property corresponding to the codec it supports (a node property can be any string I want). Note this assumes that I, the owner of the cluster, know the codecs supported by each machine.
Now I can create 5 instances of the AudioProcessor Service Type, one for each codec. Because each instance gets a unique service name that is in URI format, I can create a hierarchy with the codec names in it for discovery through Service Fabric's built-in Naming Service and querying tools, e.g., "fabric:/AudioApp/Processor/A" for codec A. Then I use a placement constraint for each instance that corresponds to the node property I set on each node to ensure the codec represented by the service instance is available on the node.
Here's what all this looks like when everything is deployed:
Node 1 - Codec: A Instance: fabric:/AudioApp/Processor/A
Node 2 - Codec: B Instance: fabric:/AudioApp/Processor/B
Node 3 - Codec: C Instance: fabric:/AudioApp/Processor/C
Node 4 - Codec: D Instance: fabric:/AudioApp/Processor/D
Node 5 - Codec: E Instance: fabric:/AudioApp/Processor/E
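As a sketch, one of those instances could be declared in the application manifest roughly like this (the type names and the "codec" property name are invented for the example; the property itself would be defined on the nodes in the cluster configuration):

    <!-- ApplicationManifest.xml (excerpt): pin one AudioProcessor instance to codec-A nodes -->
    <DefaultServices>
      <Service Name="ProcessorA">
        <StatelessService ServiceTypeName="AudioProcessorType" InstanceCount="1">
          <SingletonPartition />
          <!-- "codec" is a custom node property set on every node that supports codec A -->
          <PlacementConstraints>(codec == A)</PlacementConstraints>
        </StatelessService>
      </Service>
    </DefaultServices>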
So now I can do things like:
Find all the codecs the cluster supports by querying for a list of AudioProcessor service instances and examining their names (similar to getting a list of URIs in an HTTP API).
Send a processing request to the service that supports codec B by resolving fabric:/AudioApp/Processor/B
Scale out processing capacity of codec C by adding more machines that support codec C - Service Fabric will automatically put a new "C" AudioProcessor instance on the new node.
Add machines that support multiple codecs: set multiple node properties on such a machine, and Service Fabric will automatically place the correct service instances on it.
The way a consumer thinks about this application now is along the lines of "is there a service that supports codec E?" or "I need to talk to services A, C, and D to process this file because they have the codecs I need."
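On the consumer side, a minimal sketch of that discovery using the Service Fabric PowerShell cmdlets (service names follow the example above):

    # after Connect-ServiceFabricCluster ...
    # list every AudioProcessor instance; the codec is encoded in the service name
    Get-ServiceFabricService -ApplicationName fabric:/AudioApp

    # resolve the instance that handles codec B to find an endpoint to call
    Resolve-ServiceFabricService -PartitionKindSingleton -ServiceName fabric:/AudioApp/Processor/B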

OpenStack API Implementations

I have spent the last 6 hours reading through buzzword-riddled, lofty, high-level documents/blogs/articles/slideshares, trying to wrap my head around what OpenStack is, exactly. I understand that:
OpenStack is a free and open-source cloud computing software platform. Users primarily deploy it as an infrastructure as a service (IaaS) solution.
But again, that's a very lofty, high-level, gloss-over-the-details summary that doesn't really have meaning to me as an engineer.
I think I get the basic concept, but I would like to bounce my understanding off of SO; additionally, I am having a tough time seeing the "forest for the trees" on the subject of OpenStack's componentry.
My understanding is that OpenStack:
Installs as an executable application on 1+ virtual machines (guest VMs); and
Somehow, all instances of your OpenStack cluster know about each other (that is, all instances running on all VMs you just installed them on) and form a collective pool of resources; and
Each OpenStack instance (again, running inside its own VM) houses the dashboard app ("Horizon") as well as 10 or so other components/modules (Nova, Cinder, Glance, etc.); and
Nova is the OpenStack component/module that CRUDs VMs/nodes for your tenants; it is somehow capable of turning the guest VM that it is running inside of into its own hypervisor and spinning up 1+ VMs inside of it (hence you have a VM inside of a VM) for any particular tenant.
So please, if anything I have stated about OpenStack so far is incorrect, please begin by correcting me!
Assuming I am more or less correct, my understanding of the various OpenStack components is that they are really just APIs and require the open source community to provide concrete implementations:
Nova (VM manager)
Keystone (auth provider)
Neutron (networking manager)
Cinder (block storage manager)
etc...
Above, I believe all components are APIs. But these APIs have to have implementations that make sense for the OpenStack deployer/maintainer. So I would imagine that there are, say, multiple Neutron API providers, multiple Nova API providers, etc. However, after reviewing all of the official documentation this morning, I can find no such providers for these APIs. This leaves a sick feeling in my stomach, like I am fundamentally misunderstanding OpenStack's componentry. Can someone help connect the dots for me?
Not quite.
Installs as an executable application on 1+ virtual machines (guest VMs); and
OpenStack isn't a single executable; there are many different modules, some required and some optional. You can install OpenStack on a VM (see DevStack, a distro that is friendly to VMs), but that is not the intended usage for production; you would only do that for testing or evaluation purposes.
When you are doing it for real, you install OpenStack on a cluster of physical machines. The OpenStack Install Guide recommends the following minimal structure for your cloud:
A controller node, running the core services
A network node, running the networking service
One or more compute nodes, where instances are created
Zero or more object and/or block storage nodes
But note that this is a minimal structure. For a more robust install you would have more than one controller and network nodes.
Somehow, all instances of your OpenStack cluster know about each other (that is, all instances running on all VMs you just installed them on) and form a collective pool of resources;
The OpenStack nodes (be they VMs or physical machines, it does not make a difference at this point) talk among themselves. Through configuration, they all know how to reach the others.
Each OpenStack instance (again, running inside its own VM) houses the dashboard app ("Horizon") as well as 10 or so other components/modules (Nova, Cinder, Glance, etc.); and
No. In OpenStack jargon, the term "instance" is associated with the virtual machines that are created in the compute nodes. Here you meant "controller node", which does include the core services and the dashboard. And once again, these do not necessarily run on VMs.
Nova is the OpenStack component/module that CRUDs VMs/nodes for your tenants; it is somehow capable of turning the guest VM that it is running inside of into its own hypervisor and spinning up 1+ VMs inside of it (hence you have a VM inside of a VM) for any particular tenant.
I think this is easier to understand if you forget about the "guest VM". In a production environment OpenStack would be installed on physical machines. The compute nodes are beefy machines that can host many VMs. The nova-compute service runs on these nodes and interfaces to a hypervisor, such as KVM, to allocate virtual machines, which OpenStack calls "instances".
If your compute nodes are hosted on VMs instead of on physical machines things work pretty much in the same way. In this setup typically the hypervisor is QEMU, which can be installed in a VM, and then can create VMs inside the VM just fine, though there is a big performance hit when compared to running the compute nodes on physical hardware.
Assuming I am more or less correct, my understanding of the various OpenStack components is that they are really just APIs
No. These services expose themselves as APIs, but that is not all they are. The APIs are also implemented.
and require the open source community to provide concrete implementations
Most services need to interface with an external service. Nova needs to talk to a hypervisor; neutron to interfaces, bridges, gateways, etc.; cinder and swift to storage providers; and so on. This is really a small part of what an OpenStack service does; there is a lot more built on top that is independent of the low-level external service. The OpenStack services include support for the most common external services, and of course anybody who is interested can implement more of these.
Above, I believe all components are APIs. But these APIs have to have implementations that make sense for the OpenStack deployer/maintainer. So I would imagine that there are, say, multiple Neutron API providers, multiple Nova API providers, etc.
No. There is one Nova API implementation, and one Neutron API implementation. Through configuration you tell each of these services how to interface with lower-level services such as the hypervisor, the networking stack, etc. And as I said above, support for a range of these is already implemented, so if you are using ordinary x86 hardware for your nodes, you should be fine.
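To make that concrete, here is roughly where those choices live; a sketch of the relevant configuration (the values are just common examples):

    # /etc/nova/nova.conf -- which hypervisor nova-compute drives
    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver
    [libvirt]
    virt_type = kvm    # use "qemu" when the compute node is itself a VM

    # /etc/neutron/plugins/ml2/ml2_conf.ini -- which networking backend neutron drives
    [ml2]
    mechanism_drivers = openvswitch

    # /etc/cinder/cinder.conf -- which storage backend cinder drives
    [DEFAULT]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver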

Couchbase XDCR on Openstack

Having received no replies on the Couchbase forum after nearly 2 months, I'm bringing this question to a broader audience.
I'm configuring Couchbase Server 2.2.0 XDCR between two different OpenStack (Essex, eek) installations. I've done some reading on using a DNS FQDN trick in the couchbase-server file to add a -name ns_1@(hostname) value in the start() function. I've tried that with absolutely zero success. There's already a flag in the start() function that says -name 'babysitter_of_ns_1@127.0.0.1', so I don't know whether I need to replace that line, comment it out, or keep it. I've tried all three; none of them seemed to have any positive effect.
The FQDNs point to the OpenStack floating_ip addresses (in Amazon-speak, the "public" ones). Should they point to the fixed_ip addresses (Amazon: private/local) of the nodes? Between OpenStack installations, I'm not convinced that pointing to an unreachable (potentially duplicate) class-C private IP is of any use.
When I create a remote cluster reference using the floating_ip address of a node in the other cluster, it creates the cluster reference just fine. But when I create a replication using that reference, I always get one of two distinct errors: "Save request failed because of timeout" or "Failed to grab remote bucket 'bucket' from any of known nodes".
What I think is happening is that the OpenStack floating_ip isn't being recognized or translated to its fixed_ip address before the cluster nodes are surfed for the bucket. I know the -name ns_1@(hostname) modification is supposed to fix this, but I wonder if anyone has had success configuring XDCR between OpenStack installations and can provide some tips or hacks.
I know this "works" in AWS. It's my belief that AWS uses some custom DNS that enables queries to return an instance's fixed_ip ("private" IP) when going between availability zones, and possibly between regions. There may be other special sauce in AWS that makes this work.
This blog post on Couchbase XDCR replication on AWS should help! There are quite a few steps, so I won't paste them all here.
http://blog.couchbase.com/cross-data-center-replication-step-step-guide-amazon-aws
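In case the link rots: the core of the approach is making each node identify itself by a resolvable FQDN instead of 127.0.0.1. On Couchbase 2.x this can also be done through the node-rename REST endpoint before the node joins a cluster (the credentials and hostname below are placeholders; check that your exact version supports it):

    curl -u Administrator:password -X POST \
         http://127.0.0.1:8091/node/controller/rename \
         -d hostname=cb-node1.example.com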

JBoss multiple instances of a server, multiple ports in production environment not recommended?

The following document says:
This is easier to do and does not require a sysadmin. However, it is not the preferred approach for production systems for the reasons listed above. This approach is usually used in development to try out clustering behavior.
What are the risks with this approach in a production environment? In WebLogic it is pretty common, and I have seen a few production environments running with multiple ports (managed servers).
https://community.jboss.org/wiki/ConfiguringMultipleJBossInstancesOnOnemachine
The wiki clearly answers that question. Here is the text from the wiki for your reference:
Where possible, it is advised to use a different ip address for each instance of JBoss rather than changing the ports or using the Service Binding Manager for the following reasons:
When you have a port conflict, it makes it very difficult to troubleshoot, given a large number of ports and app servers.
Too many ports makes firewall rules too difficult to maintain.
Isolating the IP addresses gives you a guarantee that no other app server will be using the ports.
Each upgrade requires that you go in and re-set the binding manager again. Most upgrades will upgrade the conf/jboss-service.xml file, which contains the Service Binding Manager configuration.
The configuration is much simpler. When defining new ports (either through the Service Binding Manager or by going in and changing all the ports in the configuration), it's always a headache trying to figure out which ports aren't taken already. If you use a NIC per JBoss instance, all you have to change is the IP address binding argument when executing run.sh or run.bat (-b).
Once you get 3 or 4 applications using different ports, the chances really increase that you will step on another one of your applications' ports. It just gets more difficult to keep ports from conflicting.
JGroups will pick random ports within a cluster to communicate. Sometimes when clustering, if you are using the same IP address, two random ports may get picked in two different app servers (using the binding manager) that conflict. You can configure around this, but it's better not to run into this situation at all.
On the whole, having an individual IP address for each instance of an app server causes fewer problems (some of those problems are mentioned here, some aren't).
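For example, the per-IP setup the wiki recommends boils down to something like this (instance names and addresses are illustrative):

    # Two JBoss instances on one machine, each bound to its own IP alias,
    # so both keep the default ports and no Service Binding Manager is needed
    ./run.sh -c node1 -b 192.168.1.10
    ./run.sh -c node2 -b 192.168.1.11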