Can't Create Receive Connector - PowerShell

So I'm trying to copy a single receive connector from one server to another. I store the receive connector in a variable like so:
$NKCRC = Get-ReceiveConnector "<Source Receive Connector>" | Select *
Then I try to create the new one with the command below:
New-ReceiveConnector -Server $targetServer -Bindings $NKCRC.Bindings -RemoteIPRanges $NKCRC.RemoteIPRanges -Name $NKCRC.Name -TransportRole FrontEndTransport
The source RC is an Exchange 2010 RC and I'm creating it on an Exchange 2016 server. When I run the command, I get the following error:
The values that you specified for the Bindings and RemoteIPRanges parameters conflict with the settings on Receive connector "<Identity of receive connector on the target server> SMTP Relay". A Receive connector must have a unique combination of local IP address & port bindings and remote IP address ranges. Change at least one of these values.
I checked the mentioned RC and found that they don't have the exact same settings for anything at all, neither Bindings nor RemoteIPRanges. I'm kind of at a loss. The Bindings and RemoteIPRanges for each RC are below:
Source RC Bindings: {0.0.0.0:25}
Count of Source RC RemoteIPRanges: 60 (Rather than post them all, lol)
RC that Exchange thinks is a duplicate
Bindings: {0.0.0.0:25, 10.200.154.79:25}
Count of target RC RemoteIPRanges: 50
So obviously they aren't duplicates, right? What am I missing?
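For reference, a quick way to compare the two connectors' remote ranges side by side is something like the following (the connector identities are placeholders for the real ones; note that a plain string comparison like this only flags identical entries, not ranges that merely overlap):
$src = Get-ReceiveConnector "<Source Receive Connector>"
$dst = Get-ReceiveConnector "<Target Receive Connector>"
# Show which remote IP ranges appear in both connectors (compared as strings)
Compare-Object ($src.RemoteIPRanges | ForEach-Object { $_.ToString() }) `
               ($dst.RemoteIPRanges | ForEach-Object { $_.ToString() }) `
               -IncludeEqual | Where-Object { $_.SideIndicator -eq '==' }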


Connection to external Kafka Server using confluent-kafka-dotnet fails

I need to read Kafka messages with .NET from an external server. As a first step, I installed Kafka on my local machine and then wrote the .NET code. It worked as expected. Then I moved to the cloud, but the code did not work. Here is the setup that I have.
I have a Kafka Server deployed on a Windows VM (VM1: 10.0.0.4) on Azure. It is up and running. I have created a test topic and produced some messages with cmd. To test that everything is working I have opened a consumer with cmd and received the generated messages.
Then I have deployed another Windows VM (VM2, 10.0.0.5) with Visual Studio. Both of the VMs are deployed on the same virtual network so that I do not have to worry about opening ports or any other network configuration.
Then I copied my Visual Studio project code and changed the IP address of the bootstrap server to point to the Kafka server. It did not work. Then I read that I have to change the server configuration of Kafka, so I opened server.properties and modified the listeners property to listeners=PLAINTEXT://10.0.0.4:9092. It still does not work.
I have searched online and tried many of the tips, but it does not work. I think that, first of all, I need to provide credentials for the external server (VM1), and probably some other configuration. Unfortunately, the official Confluent documentation is very short, with very few examples. There is also no example for my case on the official GitHub. I have played with the "Sasl" properties in the ConsumerConfig class, but with no success either.
The error message is:
%3|1622220986.498|FAIL|rdkafka#consumer-1| [thrd:10.0.0.4:9092/bootstrap]: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 10.0.0.4:9092/bootstrap: Connect to ipv4#10.0.0.4:9092 failed: Unknown error (after 21038ms in state CONNECT)
Error: 1/1 brokers are down
Here is my .NET Core code:
static void Main(string[] args)
{
    string topic = "AzureTopic";
    var config = new ConsumerConfig
    {
        BootstrapServers = "10.0.0.4:9092",
        GroupId = "test",
        //SecurityProtocol = SecurityProtocol.SaslPlaintext,
        //SaslMechanism = SaslMechanism.Plain,
        //SaslUsername = "[User]",
        //SaslPassword = "[Password]",
        AutoOffsetReset = AutoOffsetReset.Latest,
        //EnableAutoCommit = false
    };

    using (var consumer = new ConsumerBuilder<Ignore, string>(config)
        .SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
        .Build())
    {
        consumer.Subscribe(topic);
        var cancelToken = new CancellationTokenSource();
        while (true)
        {
            // some tasks
        }
        consumer.Close();
    }
}
If you set listeners to a hard-coded IP, the broker will only bind to and accept traffic on that IP.
And your listener isn't defined as SASL, so I'm not sure why you've tried using that in the client. While using credentials is strongly encouraged when sending data to cloud resources, it's not required to fix a network connectivity problem. You definitely shouldn't send credentials over plaintext, however.
Start with these settings
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.0.4:9092
That alone should work within the VM shared network. You can use the console tools included with Kafka to test it.
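For example, from the Kafka installation directory on the broker VM (the .bat script path assumes the Windows distribution; the topic name is taken from the question):
bin\windows\kafka-console-consumer.bat --bootstrap-server 10.0.0.4:9092 --topic AzureTopic --from-beginning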
And if that still doesn't work from your local client, it's because the 10.0.0.0/8 address space is considered a private network, so you must advertise the VM's public IP and allow TCP traffic on port 9092 through Azure Firewall. It'd also make sense to expose multiple listeners, one for internal Azure network traffic and one for external, forwarded network traffic.
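A split internal/external setup might look roughly like this in server.properties (the listener names and the public IP are placeholders, and the external port would need its own firewall/NSG rule):
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://10.0.0.4:9092,EXTERNAL://<public-ip>:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL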
Details here discuss AWS and Docker, but the basics still apply.
Overall, I think it'd be easier to set up Azure Event Hubs with its Kafka support.

Configuring FQDN for GCE instance on startup

I am trying to start a google compute engine (GCE) instance with a pre-configured FQDN. We are intending to run an application that is licensed based on the contents of /etc/hosts.
I am starting the instances using the Google Cloud SDK utility - gcloud.
I have tried setting the "hostname" key using the metadata option like so:
gcloud compute instances create mynode (standard opts) --metadata hostname=mynode.example.com
Whenever I log into the developer console, under Compute Engine > VM instances, I can see hostname under "Custom metadata". This appears to be a new, custom key; it has no impact on what
http://metadata.google.internal/computeMetadata/v1/instance/hostname
returns.
I have also tried setting "instance/hostname" like the below, which causes a parsing error when using gcloud.
--metadata instance/hostname=mynode.example.com
I have successfully used the startup-script functionality of the metadata server to run a script that parses the new internal IP address of the newly created instance and updates /etc/hosts (roughly along the lines of the sketch below). This works, but it doesn't feel like "the Google way".
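For illustration, such a startup script might look roughly like this (the FQDN is a placeholder; the metadata endpoint and header are the standard GCE ones):
#!/bin/bash
# Hypothetical startup script: map the instance's internal IP to the desired FQDN
FQDN="mynode.example.com"
IP=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
echo "$IP $FQDN ${FQDN%%.*}" >> /etc/hosts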
Can I configure the FQDN (specifically, a domain name, as the instance name is always the hostname) of an instance, during instance creation, using the metaserver functionality?
Try this:
Go to your GCE >> VM instances panel.
Stop your GCE instance.
Click on the instance name.
Edit your instance, adding these values in the Custom metadata fields:
Key field: hostname / Value field: your.server.hostname
Key field: startup-script / Value field: sudo -s hostnamectl set-hostname your.server.hostname
Finally, start your instance and test with a hostnamectl command.
regards!
According to this article 'hostname' is part of the default metadata entries that provide information about your instance and it is NOT possible to manually edit any of the default metadata pairs. You can also take a look at this video from the Google Team. Within the first few minutes it is mentioned that you cannot modify default metadata pairs. As such, it does not seem like you can specify the hostname upon instance creation other than through the use of a start-up script like you've done already. It is also worth mentioning that the hostname you've specified will get deleted and auto-synced by the metadata server upon reboot unless you're using a start-up script or something that would modify it every time.
If what you're currently doing works for what you're trying to accomplish, it might be the only workaround to your scenario.
Here is a patch for /usr/share/google/set-hostname to set FQDN to GCE instance.
https://gist.github.com/yuki-takeichi/3080521322f0f1d159ea6a343e2323e6
Before you use this patch, you must set your desired FQDN in your instance's metadata by specifying hostname key.
The hostname is set each time the instance's IP address is renewed by dhclient. set-hostname is just a hook script that dhclient executes, passing it the new IP address and internal hostname, and that modifies /etc/hosts. This patch changes the source of the hostname by querying the instance's metadata from the metadata server.
The original set-hostname script is here:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google_config/bin/set_hostname.
Use this patch at your own risk.
When creating a VM, you can specify a custom FQDN hostname as an optional parameter. This feature is currently in Beta.
$ gcloud beta compute instances create INSTANCE_NAME --hostname example.hostname
This should work across OSes, and eliminate the need for workaround scripts.
More info in the docs.
-- Sirui (Product Manager, Google Compute Engine)
I've looked throughout this site for answered questions and found a few things that work, but only with a couple of solutions combined. This thread seems like the place to answer.
1) echo example.com > /etc/hostname
2) Add 127.0.1.1 example.com to /etc/hosts
3) Add the command hostnamectl set-hostname example.com to the /etc/rc.local script
4) Uncomment this line in /etc/dhcp/dhclient.conf: supersede domain-name "example.com";
5) Profit... it seems to stick after each reboot.
(Note: example.com stands in for your own FQDN, e.g. fqdndomain.com or yourfqdndomain.org.)
Also note this is for Ubuntu or Debian; other Unix variants may vary slightly. I've tested this on Ubuntu 16.04.
Regarding the wording that it is NOT possible to manually edit any of the default metadata pairs: what about the instance-level default metadata "/scheduling"? We can set that manually, as mentioned in this article.

Is there an API for Ganglia?

Hello, I would like to enquire whether there is an API that can be used to retrieve Ganglia stats for all clients from a single Ganglia server.
The Ganglia gmetad component listens on ports 8651 and 8652 by default and replies with XML metric data. The XML data type definition can be seen on GitHub here.
Gmetad needs to be configured to allow XML replies to be sent to specific hosts or all hosts. By default only localhost is allowed. This can be changed in /etc/ganglia/gmetad.conf.
Connecting to port 8651 will get you a default XML report of all metrics as a response.
Port 8652 is the interactive port which allows for customized queries. Gmetad will recognize raw text queries sent to this port, i.e. not HTTP requests.
Here are examples of some queries:
/?filter=summary (returns a summary of the whole grid, i.e. all clusters)
/clusterName (returns raw data of a cluster called "clusterName")
/clusterName/hostName (returns raw data for host "hostName" in cluster "clusterName")
/clusterName?filter=summary (returns a summary of only cluster "clusterName")
The ?filter=summary parameter changes the output to contain the sum of each metric value over all hosts. The number of hosts is also provided for each metric so that the mean value may be calculated.
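For example, from a host that gmetad allows (the cluster name below is a placeholder), both ports can be queried with netcat:
# Full XML report of all clusters from the default port
nc localhost 8651
# Summary of a single cluster via the interactive port
echo "/clusterName?filter=summary" | nc localhost 8652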
Yes, there's an API for Ganglia: https://github.com/guardian/ganglia-api
You should check this presentation from 2012 Velocity Europe - it was really a great talk: http://www.guardian.co.uk/info/developer-blog/2012/oct/04/winning-the-metrics-battle
There is also an API you can install from PyPI with pip install gangliarest; it sets up a configurable API backed by a Redis cache and indexer to improve performance.
https://pypi.python.org/pypi/gangliarest

Using MSMQ with a Hosts File entry

I'm trying to use an alias in the hosts file to point to a server containing an MSMQ. If I specify the actual server name in the MSMQ path then everything works fine:
var queue = new MessageQueue(@"FormatName:DIRECT=OS:queue-server\Private$\some-queue");
var enumerator = queue.GetMessageEnumerator2();
while (enumerator.MoveNext())
{
    // Do something
}
However if I create the following hosts file entry:
XXX.XXX.XXX.XXX queue-server-alias #queue-server
Then reference the queue using the alias:
var queue = new MessageQueue(@"FormatName:DIRECT=OS:queue-server-alias\Private$\some-queue");
Then I get the following error:
The queue does not exist or you do not have sufficient permissions to perform the operation.
The hosts file entry is correct and I can ping the alias and it returns the correct IP address. I've read through the following article detailing the various MSMQ path formats but none of them seem to resolve the issue:
Difference between Path name and Format name when accessing MSMQ queues.
Any ideas?
Open your registry, make sure
HKEY_LOCAL_MACHINE\Software\Microsoft\MSMQ\Parameters\IgnoreOSNameValidation
is set to 1 (DWORD value)
This means that MSMQ will not validate the destination queue name before trying to send the message.
(from John Breakwell's post here)
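For example, this can be set from an elevated command prompt roughly as follows (the Message Queuing service may need a restart for the change to take effect):
reg add "HKLM\Software\Microsoft\MSMQ\Parameters" /v IgnoreOSNameValidation /t REG_DWORD /d 1 /f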

Can I open a clustered MQ queue for writing in Perl?

If I have a WebSphere MQ queue defined on another queue manager in the cluster, is there a way I can open it for writing using the Perl interface? The code below comes back with MQRC 2085 (MQRC_UNKNOWN_OBJECT_NAME).
$messageQ = MQSeries::Queue->new(
    QueueManager => $qMgr,
    Queue        => $queue,
    Options      => $openOpt,
) or die ">>>ERROR2: Unable to open the queue: $queue\n";
Yes! The Perl modules are a thin veneer over the WMQ API and expose all the basic options and most of the really esoteric stuff as well.
When you open a queue, WebSphere MQ performs name resolution on the values you provide for Queue and QMgr names. If you provide both a Queue and a QMgr name then the object reference is fully qualified and WMQ will attempt to open it as named. So if the name you provide is the local QMgr and the clustered queue does not have a locally defined instance, the open will fail with a 2085 Unknown Object Name.
The trick to opening a clustered queue is to provide a null value for the QMgr name. This causes name resolution to check the local QMgr for a queue of the same name and then, finding none, to check the cluster repository and resolve the open to the clustered queue. Note that the queue must be advertised to the cluster for this to work. Specifically, the CLUSTER or CLUSNL attribute of the target queue must be non-blank and refer to a cluster that the source QMgr participates in. Similarly, the destination QMgr must also participate in the same cluster as the source QMgr.
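In terms of the Perl interface, a minimal sketch might look like the following (the queue manager and queue names are placeholders, and the parameter and method names follow the MQSeries::Queue documentation, so double-check them against your installed version):
use MQSeries::QueueManager;
use MQSeries::Queue;
use MQSeries::Message;

# Connect to the local queue manager
my $qmgr = MQSeries::QueueManager->new(QueueManager => 'QMGRA')
    or die "Unable to connect to queue manager\n";

# Open by queue name only (no remote QMgr name) so that name resolution
# can fall through to the cluster repository
my $clusterQ = MQSeries::Queue->new(
    QueueManager => $qmgr,
    Queue        => 'TARGET.QUEUE',
    Mode         => 'output',
) or die "Unable to open TARGET.QUEUE\n";

$clusterQ->Put(Message => MQSeries::Message->new(Data => 'test message'))
    or die 'Put failed, reason ' . $clusterQ->Reason() . "\n";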
Note also that if you specify a QMgr name on the open that is not the local QMgr, then WMQ will attempt to resolve the QMgr name only. If it can resolve a route to that QMgr then it will send the message there. This means that in a cluster you can send a message to any queue on any QMgr so long as you know the fully-qualified name.
Finally, you can define a local alias over a clustered queue. For example if you are on QMGRA and DEF QA(TARGET.QUEUE) TARGQ(TARGET.QUEUE) and then on QMGRB and QMGRC in the same cluster you DEF QL(TARGET.QUEUE) CLUSTER(MYCLUS) then it is possible to open QMGR=QMGRA QUEUE=TARGET.QUEUE and still have it work as expected. Note that the alias is NOT advertised to the cluster but the target queue is. The only issue with this approach is that the first time it is opened the API call may fail if the cluster query takes too long. When I do this in Production, I always use amqsput on the alias ahead of time to make the QMgr query the repository before the actual application opens the queue. Why would you do this? If security is a concern you probably don't want to authorize all apps directly to the cluster XMitQ because, as noted above, they could then put a message onto any queue on any QMgr in the cluster, including SYSTEM.ADMIN.COMMAND.QUEUE. The alias gives you a place to hang authorizations and restrict the user to specific destinations in the cluster.
So short answer, make sure you provide a null QMgr name on the Open call or set up a local alias over the clustered queue. For more about the security aspects of this, see the WMQ Security presentation at http://t-rob.net/links