Cannot connect to MongoDB Atlas Cluster: DNSHostNotFound - mongodb

I created a new cluster in MongoDB Atlas but I can't connect to it through the mongo shell.
C:\git_symphony\esp8266\SymphonySocket>mongo "mongodb+srv://<clustername>-gy7bf.azure.mongodb.net/test" --username <USERNAME>
DNSHostNotFound: Failed to look up service "":No records found for given DNS query.
try 'mongo --help' for more information
I tried switching regions but it didn't work. I've also tried using Compass on my mac but it just loads indefinitely when I try to connect. What could possibly be wrong?

Turns out, my ISP blocks all connections to MongoDB for some reason. I haven't contacted them yet, but I find this very silly, as I racked my brain trying to solve this when the problem wasn't in my control at all.

I also had this problem with Comcast Xfinity. DHCP sets DNS servers that would not resolve the MongoDB hostnames. I'm running KDE Neon Linux (Ubuntu 18.04). In order to get things working I had to supersede the domain-name-servers supplied by Comcast. I used Google's public DNS, but others can be used. I had to edit the /etc/dhcp/dhclient.conf file (you'll need root permissions) and add the following line:
supersede domain-name-servers 8.8.8.8, 8.8.4.4;
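After editing dhclient.conf you usually need to renew the DHCP lease for the change to take effect, and you can confirm the new resolvers actually see the Atlas SRV record. A minimal sketch, assuming an Ubuntu-style system and the cluster host from the question (the interface name and cluster host are placeholders):
sudo dhclient -r eth0 && sudo dhclient eth0    # release and renew the lease on eth0 (or restart networking)
nslookup -type=SRV _mongodb._tcp.<clustername>-gy7bf.azure.mongodb.net 8.8.8.8    # SRV lookup against Google DNS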
I hope this helps somebody, took me too long to figure it out. :-)
I just found this post by M. Brandao with the fix for Windows users:
Open the Control Panel.
Click View network status and tasks.
Click Change adapter settings on the left portion of the window.
Double-click the icon for the Internet connection you're using.
Click the Properties button.
Click and highlight Internet Protocol Version 4 (TCP/IPv4) and click Properties.
If not already selected, select the Use the following DNS server addresses option.
Enter the new DNS addresses (see above), click OK, and close out of all other windows.

Have you whitelisted your IP address? IP whitelisting is important; otherwise it will not connect.
Have you created this cluster recently, and is it in the Europe region?
Is the cluster properly deployed, or are you experiencing any issue in the deployment of the cluster?

Unable to add remote node in Rundeck 4.9.0

Following the docs from Rundeck; however, the only button I have under the "Sources" tab is "ResourceModelSource".
When I click that button I get a blank.
PPS: The issue happened on a previous version too - I'm new to Rundeck, so I can't say that it EVER worked.
I tried adding a manual resources.xml in the project directory (which I had to create manually, which tells me that's another issue) and reloading Rundeck, but that did not seem to work.
While it's not the likely cause, I'll mention it here in case it IS relevant: I'm hosting on port 4440, but I'm using nginx to forward http (not https) requests on 443 to 4440; this is due to corporate network security policy.
I'm sure it's something where it's having an I/O issue on the local host, but I'm not seeing anything in the logs.
That is a known issue when you have Rundeck installed behind a proxy server; take a look at this: https://github.com/rundeck/rundeck/issues/6278. The solution is to set grails.serverURL (in the rundeck-config.properties file) to the external URL defined for Rundeck in your proxy server (e.g. grails.serverURL=http://my_domain/rundeck), then restart the Rundeck service.
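For reference, a minimal sketch of what that change might look like (the config file path, URL, and service name are assumptions; adjust them to your install):
# /etc/rundeck/rundeck-config.properties
grails.serverURL=http://my_domain/rundeck
# then restart Rundeck (the service name may differ per install)
sudo service rundeckd restart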

Whatsapp Business API production setup not working

I am trying to configure or set up the production environment of the WhatsApp Business API as described at https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
I have done everything mentioned there; my Docker containers are also running on port 9090, as can be seen in the image.
Still, I can't access it. Whenever I try to call https://localhost:9090 I get a "This site can't be reached" error. The WhatsApp Business API does not have good documentation or tutorials so far, so this site is my last resort.
I had a similar problem which could be your case. I saw the docker containers were OK but nothing was working. After a day of searching I found where it happened: my problem was that I had installed MySQL MANUALLY (not as a docker container) on the same instance where Docker is running, and in db.env I just used 127.0.0.1. This was passed literally to the docker containers, and looking at the wait_on_mysql.sh script, the WhatsApp docker containers were waiting until the MySQL IP had connectivity before actually doing anything, printing "MySQL is not up yet - sleeping" every second; of course they would never find any connectivity.
Since my installation is for development, and I am already using that database for other things, my solution was to use the 172.17.0.1 IP (the docker gateway of the containers) instead, then add two sets of iptables rules on the host to redirect traffic from the docker containers' target IP to the IP MySQL is bound to on that port (3306, the default in my case). After that everything works well. I think there are better solutions, but I didn't want to go further with it; you should evaluate whether this applies to your case.
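To make the redirect idea above concrete, here is a hedged sketch of the kind of iptables rules involved, assuming MySQL is bound to a host address such as 192.168.1.50 (all IPs and the interface name are placeholders; if MySQL only listens on 127.0.0.1 you would additionally need net.ipv4.conf.docker0.route_localnet=1):
# send container traffic aimed at the docker gateway on 3306 to the host's MySQL address
iptables -t nat -A PREROUTING -i docker0 -p tcp -d 172.17.0.1 --dport 3306 -j DNAT --to-destination 192.168.1.50:3306
# allow the redirected connections from the containers to reach the local MySQL
iptables -A INPUT -i docker0 -p tcp --dport 3306 -j ACCEPT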
Check the command:
docker-compose logs > debug_output.txt
That gives you insight into what's happening; I hope it helps someone.
I think your setup is already complete. You just need to start the registration process and start sending messages. The containers are up and running, but calling https://localhost:9090 won't send you any response, as this is not a specified API endpoint expected to be used.
Since you're using the prod single instance, the documentation can be found here, which seems pretty straightforward: https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
You seem to have completed the steps up to step 7. The next step can be to perform a health check to make sure it is healthy. The API endpoint for that would be https://localhost:9090/v1/health (see https://developers.facebook.com/docs/whatsapp/api/health).
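As a quick check from the host, something along these lines should respond; -k tells curl to accept the self-signed certificate, and the Authorization header is only an assumption in case your setup requires the token obtained during login:
curl -k https://localhost:9090/v1/health -H "Authorization: Bearer <your-auth-token>"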
Has your db also been setup?
I cannot see it in the docker screenshot.
Also - you have to accept the certificate, as it does not have a certificate issued by a public CA.

How can I connect to MongoDB Atlas using Robomongo?

I signed up for the free tier of MongoDB Atlas and created a cluster. Now I want to know how I can create a database and connect to it using Robomongo.
1) (In the MongoDB Atlas console) First of all, click on ALLOW ACCESS FROM ANYWHERE (see the image below) and put in some random IP address; don't click on Add Current IP Address, otherwise it will not connect with Robomongo.
2) Now open Robomongo, select the Connection tab, select type Direct Connection, and put your primary cluster in Address [you can get your primary cluster address from Project->Clusters->(choose) Primary Cluster->"There you will find your Primary Cluster Address"].
3) Now click on the Authentication tab, set the database name to admin, and put in your username and password; the Auth Mechanism is SCRAM-SHA-1.
4) Select Self-signed Certificate as the Authentication Method.
5) Now click on Test; we are done!
The standard Mongo URI connection schema has the form:
mongodb://[username:password@]host1[:port1][,...hostN[:portN]][/[database][?options]]
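As an illustration only, a filled-in Atlas connection string in that form might look like this (the hostnames, replica set name, and credentials are placeholders):
mongodb://myUser:myPassword@cluster0-shard-00-00-gy7bf.azure.mongodb.net:27017,cluster0-shard-00-01-gy7bf.azure.mongodb.net:27017,cluster0-shard-00-02-gy7bf.azure.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin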
Security Reasons
Do not allow access everywhere for security reasons
Restrict to your IP address
Connect via roboMongo 3T using a secondary cluster node from MongoDB Atlas
In case it helps others, Robo 3T version 1.3 and greater has a "From SRV" field where you can paste the SRV connection string and it fills out the connection options correctly for you. As of 1.3 it looks like this:
As of writing, you can get the connection string by clicking the "connect" button next to your cluster dashboard's graphs, and then clicking "Connect your application", and you get a screen like this with the connection string that you can copy:
@kdblue, it's not working for me. But when I tried using the replica set, I was able to connect successfully.
Robo 3T Version: 1.2.1
Steps followed:
In your MongoDB Atlas dashboard (cloud.mongodb.com), copy all three replica set member names and note them down. (Refer to the image for reference; the replica set members are marked in the orange box.)
Now, in your Robo 3T, in the Connection tab, select the type Replica Set.
Provide a suitable name for your connection.
Now, in Members, add all three copied replica set members (an illustrative example is shown after this answer). Refer to the image for details.
Provide authentication, if you have any, and follow the SSL steps (mandatory) as suggested by @kdblue in the previous answer.
You should now be able to connect successfully.
Thank you.
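For illustration, the three Members entries typically look like the following host:port pairs (the hostnames are placeholders for your own cluster's members):
cluster0-shard-00-00-abcde.mongodb.net:27017
cluster0-shard-00-01-abcde.mongodb.net:27017
cluster0-shard-00-02-abcde.mongodb.net:27017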
[Updated]
It is now possible to connect to Mongo Atlas 3.4 free cluster with the latest beta: Robomongo 1.1 - Beta version with MongoDB 3.4 Support
Direct connections do not work with Replica Sets and Robo3T.
And the cluster you create on Atlas is a 3-Node replica set.
Select Connection Type: Replica Set on the first tab
To find the 3 members in the new Atlas dashboard:
click on Clusters in your Atlas dashboard.
click the Collections button on the cluster.
click the Overview tab on the next menu.
you will see the list of your set (primary and two secondaries).
then follow @Balasubramani M's answer.
If you have the "TLS" instead of the "SSL" tab, don't get crazy.
Just do exactly the same that you would do with "SSL":
Mark the "Use TLS protocol" checkbox
Choose the "Self-signed Certificate" authentication method option
And that's all!
Instead of connecting with Robomongo, I would recommend you connect with Compass. That is an open-source GUI tool for connecting to your MongoDB Atlas deployment, and it is also supported by the MongoDB people.
You can download Compass from https://www.mongodb.com/download-center/compass.
Additionally, many functionalities are not supported in Robomongo.
Robomongo is a 3rd-party tool, so even if you go to the MongoDB people they will not support it.
Instructions for connecting your Atlas cluster with Compass can be found in the documentation: https://docs.atlas.mongodb.com/compass-connection/
However, if you encounter any issue even after following my response, let me know and I will help you further.
No matter what I tried it wouldn't work; all I ended up having to do was update to the latest version, at which point my old connection setup worked fine.
https://robomongo.org/download
Tip: I struggled trying to update an existing connection - no dice.
I created one from scratch using the above and connected on the first attempt.

Apache CloudStack: No templates showing when adding instance

I have set up Apache CloudStack on a CentOS 6.8 machine following the quick installation guide. The management server and KVM are set up on the same machine. The management server is running without problems, and I was able to add a zone, pod, cluster, and primary and secondary storage from the web interface. But when I try to add an instance, no templates are shown in the second stage, as you can see in the screenshot.
However, I am able to see two templates under the Templates link in the web UI.
But when I select the template and navigate to the Zone tab, I see "Timeout waiting for response from storage host" and the Ready field shows No.
When I check the management server logs, it seems there is an error when CloudStack tries to mount the secondary storage for use. The segment below from the cloudstack-management.log file shows this error.
2017-03-09 23:26:43,207 DEBUG [c.c.a.t.Request] (AgentManager-Handler-
14:null) (logid:) Seq 2-7686800138991304712: Processing: { Ans: , MgmtId:
279278805450918, via: 2, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":
{"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException:
GetRootDir for nfs://172.16.10.2/export/secondary failed due to
com.cloud.utils.exception.CloudRuntimeException: Unable to mount
172.16.10.2:/export/secondary at /mnt/SecStorage/6e26529d-c659-3053-8acb-
817a77b6cfc6 due to mount.nfs: Connection timed out\n\tat
org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.getRootDir(Nf
sSecondaryStorageResource.java:2080)\n\tat
org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.execute(NfsSe
condaryStorageResource.java:1829)\n\tat
org.apache.cloudstack.storage.resource.NfsSecondaryStorageResource.executeReques
t(NfsSecondaryStorageResource.java:265)\n\tat
com.cloud.agent.Agent.processRequest(Agent.java:525)\n\tat
com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:833)\n\tat
com.cloud.utils.nio.Task.call(Task.java:83)\n\tat
com.cloud.utils.nio.Task.call(Task.java:29)\n\tat
java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\
n\tat
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\
n\tat java.lang.Thread.run(Thread.java:745)\n","wait":0}}] }
Can anyone please guide me on how to resolve this issue? I have been trying to figure it out for some hours now and don't know how to proceed further.
Edit 1: Please note that my LAN address was 10.103.72.50, which I assume is not a /24 address. I tried to give CentOS a static IP by making the following settings in the ifcfg-eth0 file:
DEVICE=eth0
HWADDR=52:54:00:B9:A6:C0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.10.2
NETMASK=255.255.255.0
GATEWAY=172.16.10.1
DNS1=8.8.8.8
DNS2=8.8.4.4
But doing this would stop my internet. As a workaround, I reverted these changes and installed all the packages first. Then I changed the IP to static with the same configuration settings as above and ran the CloudStack management server. Everything worked fine until I bumped into this template issue. Please help me figure out what might have gone wrong.
I know I'm late, but for people trying out in the future, here it goes:
I hope you have successfully added a host as mentioned in Quick Install Guide before you changed your IP to static as it autoconfigures VLANs for different traffic and creates two bridges - generally with names 'cloud' or 'cloudbr'. Cloudstack uses the Secondary Storage System VM for doing all the storage-related operations in each Zone and Cluster. What seems to be the problem is that secondary storage system vm (SSVM) is not able to communicate with the management server at port 8250. If not, try manually mounting the NFS server's mount points in the SSVM shell. You can ssh into the SSVM using the below command:
ssh -i /var/cloudstack/management/.ssh/id_rsa -p 3922 root#<Private or Link local Ip address of SSVM>
I suggest you run the /usr/local/cloud/systemvm/ssvm-check.sh after doing ssh into the secondary storage system VM (assuming it is running) and has it's private, public and link local IP address. If that doesn't help you much, take a look at the secondary storage troubleshooting docs at Cloudstack.
I would further recommend, if anyone in future runs into similar issues, check if the SSVM is running and is in "Up" state in the System VMs section of Infrastructure tab and that you are able to open up a console session of it from the browser. If that is working go on to run the ssvm-check.sh script mentioned above which systematically checks each and every point of operation that SSVM executes. Even if console session cannot be opened up, you can still ssh using the link local IP address of SSVM which can be accessed by opening up details of SSVM and than execute the script. If it says, it cannot communicate with Management Server at port 8250, I recommend you check the iptables rules of management server and make sure all traffic is allowed at port 8250. A custom command to check the same is nc -v <mngmnt-server-ip> 8250. You can do a simple search and learn how to add port 8250 in your iptables rules if that is not opened. Next, you mentioned you used CentOS 6.8, so it probably uses older versions of nfs, so execute exportfs -a in your NFS server to make sure all the NFS shares are properly exported and there are no errors. I would recommend that you wait for the downloading status of CentOS 5.5 no GUI kvm template to be complete and its Ready status shown as 'Yes' before you start importing your own templates and ISOs to execute on VMs. Finally, if your ssvm-check.sh script shows everything is good and the download still does not start, you can run the command: service cloud restart and actually check if the service has gotten a PID using service cloud status as the older versions of system vm templates sometimes need us to manually start the cloud service using service cloud start even after the restart command. Restarting the cloud service in SSVM triggers the restart of downloading of all remaining templates and ISOs. Side note: the system VMs uses a Debian kernel if you want to do some more troubleshooting. Hope this helps.
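If it helps, here is a rough sketch of the manual checks described above, run from the SSVM shell and the management server respectively (the NFS path comes from the question; the mount point and the management server IP are placeholders):
# from the SSVM: management server reachability and a manual NFS mount test
nc -v <mngmnt-server-ip> 8250
mkdir -p /mnt/nfstest
mount -t nfs 172.16.10.2:/export/secondary /mnt/nfstest
umount /mnt/nfstest
# on the CentOS 6 management server: open port 8250 if iptables is filtering it
iptables -I INPUT -p tcp --dport 8250 -j ACCEPT
service iptables save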

Replica Sets with MongoVUE

I want to get MongoVUE set up to work with replica sets. I have followed the instructions found in the link below at step 2(b).
http://www.mongovue.com/2012/03/26/establishing-connections-to-servers-and-replica-sets-using-mongovue/
However, all I get is "Connection Refused".
See image: http://snag.gy/PqXmQ.jpg
All instances are running - as Windows services, if that helps - and as you can see from the image they are all part of a replica set. The notepad at the bottom shows the full text string I have used for the server.
Thanks,
Matt
Short answer: Use IP addresses...
I'm just looking back at issues I've had in the past. This looks like it was resolved quite simply by using IP addresses instead of host names.
HTH
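For example, where the replica set members were entered as hostname:port, the same entries would instead look something like this (the addresses are placeholders):
192.168.0.11:27017
192.168.0.12:27017
192.168.0.13:27017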