My Google Cloud SQL instance is always in RUNNABLE state and never goes to RUNNING state - google-cloud-sql

I have created a Google Cloud SQL instance. The following configuration is set:
IPv4 address -> 173.194.247.217
IPv6 address -> 2001:4860:4864:1:3f64:544:3d9d:32ca
Database version -> MySQL 5.5
Region -> Asia
Backup window -> 2:30 AM - 6:30 AM
Binary log -> Disabled
File system replication -> Synchronous
Preferred location -> None
Tier -> D0
Pricing plan -> Package
My problem is that the instance is always in the RUNNABLE state, even after restarting, and even though I have changed the Activation Policy to "Always On".
Because of this, I am not able to ping the IPv4 address (173.194.247.217), and so I also cannot connect via mysql-client or a similar application.
Please help!

In order to connect to a Cloud SQL instance via IP, a password first has to be defined for the root account, and you must also limit the IPs (either individual /32 addresses or whole IP ranges) that are allowed to connect to the instance. Therefore:
1: Go to your Cloud SQL instance in the Developers Console, and click "ACCESS CONTROL" at the top of the page;
2: In the Authorized Networks (Optional) section, click "Add Authorized Network" and enter at least the subnet you'll be connecting from;
3: In the Set Root Password section, enter a strong password, then click the "Set" button to the right. Lastly, save your changes.
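Once an authorized network and the root password are in place, you should be able to connect with a standard MySQL client from that network. A minimal sketch, reusing the IPv4 address from the question (your address and password will differ):
# connect from a machine inside the authorized network; you will be prompted for the root password
mysql --host=173.194.247.217 --user=root --password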

Related

Cannot connect to MongoDB Atlas Cluster: DNSHostNotFound

I created a new cluster in MongoDB Atlas but I can't connect to it through the mongo shell.
C:\git_symphony\esp8266\SymphonySocket>mongo "mongodb+srv://<clustername>-gy7bf.azure.mongodb.net/test" --username <USERNAME>
DNSHostNotFound: Failed to look up service "":No records found for given DNS query.
try 'mongo --help' for more information
I tried switching regions but it didn't work. I've also tried using Compass on my Mac, but it just loads indefinitely when I try to connect. What could possibly be wrong?
Turns out, my ISP blocks all connections to MongoDB for some reason. I haven't contacted them yet, but I find this very silly, as I racked my brain trying to solve this when the problem wasn't at all in my control.
I also had this problem with Comcast Xfinity. DHCP sets DNS servers that would not look up the MongoDB connections. I'm running KDE Neon Linux (Ubuntu 18.04). In order to get things working, I had to supersede the domain-name-servers supplied through Comcast. I used Google's public DNS, but there are others that can be used. I had to edit the /etc/dhcp/dhclient.conf file (you'll need root permissions) and add the following line:
supersede domain-name-servers 8.8.8.8, 8.8.4.4;
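To check that the new resolvers can actually see the Atlas cluster, you can query the SRV record that a mongodb+srv:// URI resolves to; this is just a sketch, reusing the cluster host name from the question above:
# ask Google's DNS directly for the Atlas SRV record
dig @8.8.8.8 +short SRV _mongodb._tcp.<clustername>-gy7bf.azure.mongodb.net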
I hope this helps somebody, took me too long to figure it out. :-)
I just found this post by M. Brandao with the fix for Windows users:
Open the Control Panel.
Click View network status and tasks.
Click Change adapter settings on the left portion of the window.
Double-click the icon for the Internet connection you're using.
Click the Properties button.
Click and highlight Internet Protocol Version 4 (TCP/IPv4) and click Properties.
If not already selected, select the Use the following DNS server addresses option.
Enter the new DNS addresses (see above), click OK, and close out of all other windows.
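If you prefer the command line, the same DNS change can be made with netsh from an elevated prompt; a rough sketch, assuming your connection is named "Wi-Fi" (substitute your adapter's name):
rem set Google's public DNS as primary and secondary resolvers
netsh interface ip set dns name="Wi-Fi" static 8.8.8.8
netsh interface ip add dns name="Wi-Fi" 8.8.4.4 index=2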
Have you whitelisted your IP address? IP whitelisting is important, otherwise it will not connect.
Have you created this cluster recently, and is it in the Europe region?
Is the cluster properly deployed, or are you experiencing any issue in the deployment of the cluster?

Sophos UTM VPN not accessible

I used the Sophos UTM 9.510 ha_standalone CloudFormation template (https://github.com/sophos-iaas/aws-cf-templates/blob/master/utm/9.510/standalone.template) and used defaults where possible. I did not use an existing Elastic IP, so it created its own at (scrubbed) 50.12.12.123.
I gave it a hostname of (for example) vpn.example.com, and after creation I created an A record pointing vpn.example.com to 50.12.12.123.
I don't have a license and just pay hourly for the AMI.
I understand that I should be able to hit https://vpn.example.com:4444 or https://50.12.12.123:4444 to see the admin panel. However, it times out and doesn't load anything.
When I deployed the stack, I got an email at the admin address I provided saying "REST daemon not running - restarted". I assume it restarted fine, since I have received no new emails and the EC2 instance is running.
Has anyone else experienced this? Is there a step I'm missing? Aside from creating the Route 53 record, I thought the CloudFormation template should just work right out of the box.
The default security groups blocked traffic. I modified one of them to accept all traffic and the dashboard became accessible. I will now refine access further.
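For anyone who wants something tighter than allowing all traffic, opening only the WebAdmin port (4444) to your own IP should be enough for the dashboard; a hedged AWS CLI sketch with placeholder values for the security group ID and your public IP:
# allow the UTM WebAdmin port from a single address only
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 4444 --cidr 203.0.113.10/32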

AWS RDS for PostgreSQL cannot be connected after several hours

I created several RDS instances with PostgreSQL and get the same problems:
I can connect to all of them right after creating the instances.
After several hours (I stop working on it, turn off my laptop), I cannot connect to any of them again.
I use DBeaver for the connections; the error shown is "Connection attempt timed out."
I attached the instance information as a screenshot. Hope someone can help me with this problem. Thank you in advance.
Finally, I found the answer to my problem. For the "connection timeout" error, one of the possible reasons is the security settings. Although I set the instance as public when creating it, it is attached to a private VPC security group that is not exposed publicly.
We can attach the RDS instance to a publicly accessible security group inside the VPC (I don't think it is a good setting, it's just for an AWS beginner like me) as below (an equivalent AWS CLI sketch follows the steps):
From Services, select EC2, then select Security Groups in the left panel.
Click the "Create Security Group" button.
In the dialog, enter a name for the group, e.g. "postgres-public-access".
In the dialog, click the "Add Rule" button.
In the "Type" column, select "PostgreSQL" or the type matching your RDS instance (or input the port of your RDS instance; it is usually 5432 for Postgres).
In the "Source" column, enter "0.0.0.0/0".
Click the "Save" button.
From Services, select RDS, select the RDS instance, and click the "Modify" button.
In "Network & Security", "Security group", select the VPC security group you just created; in my case, it is "postgres-public-access".
Click the "Continue" button.
Now you can go ahead and connect to your database from anywhere.
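If you prefer scripting it, roughly the same setup can be done with the AWS CLI; this is only a sketch with made-up IDs and names, and as noted above, 0.0.0.0/0 is not a setting you want to keep beyond experimenting:
# create the security group (note the GroupId in the output) and open the Postgres port
aws ec2 create-security-group --group-name postgres-public-access --description "Public Postgres access" --vpc-id vpc-0abc1234
aws ec2 authorize-security-group-ingress --group-id sg-0abc1234def567890 --protocol tcp --port 5432 --cidr 0.0.0.0/0
# attach the group to the RDS instance
aws rds modify-db-instance --db-instance-identifier mydbinstance --vpc-security-group-ids sg-0abc1234def567890 --apply-immediately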
I had to add/edit a rule to the VPC to allow connections from All sources.
Steps:
Go to the DB > Connectivity & security > click on the VPC (vpc-...)
Under Security > Security Groups > open the sg-[something] whose VPC ID matches the DB's VPC
Inbound Rules > Edit Rules > Change Source to Anywhere
So it seems that even when creating the DB and selecting allow public access, it only includes traffic from within the VPC. By doing the above steps you can allow access from all sources.
I just followed the guide: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html
Run through the typical things:
Make sure the database is public! Check in the AWS web console; if it's private, make it public.
Check that you have the firewall port open for the software and the port you're trying to connect through.
When you create a DB in RDS, a security group is created automatically with an All/All rule.
You can add a rule for just TCP port 5432 instead.
Check the username/password - sometimes incorrect ones get cached.
Try to ping the DB to see if it's an internet connection problem (a quick connectivity check is sketched below).
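Since RDS endpoints usually don't answer ICMP, a TCP-level check tells you more than ping; a small sketch with a placeholder endpoint:
# check that the Postgres port is reachable at all
nc -vz mydbinstance.abc123xyz0.us-east-1.rds.amazonaws.com 5432
# then try a real client connection
psql "host=mydbinstance.abc123xyz0.us-east-1.rds.amazonaws.com port=5432 user=postgres dbname=postgres"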
I faced the same issue, and it turned out to be because of the VPN I was using; when I disconnected the VPN, I was able to connect.
Select DB -> Modify -> Connectivity -> Save

How can I set up a cell and collective in Bluemix

I'm trying to set up a cell and a collective in a WAS for Bluemix service. I've found a few steps online for generic Liberty setup, but nothing specific to a Bluemix collective or cell. Can someone point me in the right direction?
At a high level, you should be able to do the following for a Cell:
Login to the Admin Console as wsadmin
Create a server.
Open all the ports on each host for each server created by running the openFirewallPorts.sh script. Below, you will find the standard ports for a new server, given that only one server exists on each host. You may need to open more ports for additional servers on the same host, since ports can be unique per server. Try the following:
cd WAS_HOME/virtual/bin
export serverPorts=2810:TCP,2810:UDP,8880:TCP,8880:UDP,9101:TCP,9101:UDP,9061:TCP,9061:UDP,9080:TCP,9080:UDP,9354:TCP,9354:UDP,9044:TCP,9044:UDP,9443:TCP,9443:UDP,5060:TCP,5060:UDP,5061:TCP,5061:UDP,11005:TCP,11005:UDP,11007:TCP,11007:UDP,9633:TCP,9633:UDP,7276:TCP,7276:UDP,7286:TCP,7286:UDP,5558:TCP,5558:UDP,5578:TCP,5578:UDP
sudo ./openFirewallPorts.sh -ports $serverPorts -persist true
Start your server.
Deploy your application.
There are a few slight differences for a Liberty Collective, but again, at a high level, you should be able to try the following:
Switch your user to wsadmin or ssh to your host using wsadmin / password
On each host, create a server and join it to the collective. Be sure to use the full host name of the controller for the --host parameter.
cd WAS_HOME/bin
./server create server
./collective join server --host=yourhostname --port=9443 --user=wsadmin --password=xxxxxxxx --keystorePassword=yyyyyyyy
Accept the chain certificate (y/n) y
Save the output from each join so you can paste it into each host's application server.xml file before deploying your application.
Install the features required by your application on each host. The features listed below are an example.
cd /opt/IBM/WebSphere/Liberty/bin
./featureManager install --acceptLicense ejblite-3.2 websocket-1.0 jsp-2.3 jdbc-4.1 jaxrs-2.0 cdi-1.2 beanValidation-1.1
NOTE: Output from this command will contain messages similar to:
chmod: changing permissions of `/opt/IBM/WebSphere/Liberty/bin/featureManager': Operation not permitted
This is OK. You should see this message upon completion:
Product validation completed successfully.
Update your application's server.xml file with the information saved in Step 2.
Start your server.
Deploy your application.
Verify your application is reachable at <hostname>:9080/appname
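A quick way to verify reachability from a terminal, assuming the placeholder host name and context root used above:
# expect an HTTP status line if the server and the application are up
curl -I http://yourhostname:9080/appname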

SQL Server Management Studio - How to run queries across multiple servers

My 2 servers both use SQL Server 2008 R2.
I have my local SQL Server and also an Amazon machine running an instance of SQL Server there.
I'm able to connect from my local machine to that Amazon SQL Server using the standard 10.10.10.10, 1433 connection from my local Management Studio.
What I need to do now is run a query that tells me which records I have locally that are not on the Amazon server right now.
Something like:
SELECT *
FROM [LOCAL].dbo.Table1
WHERE Field1 NOT IN
(SELECT Field1 FROM [AMAZON].Database1.dbo.Table1)
================================
Question:
I don't know how to reference the "AMAZON" location in the query window itself, since it's running on a different server.
Any help is truly appreciated !!!
You have to configure the AMAZON server as a linked server on your local machine. If you name it "AMAZON", your query will work exactly as you wrote it.
In SSMS, go to \Server Objects\Linked Servers. Right-click and choose 'New Linked Server'. Name your server and choose the 'SQL Server' radio button. Because I was an authorized user on both machines with Windows credentials, I selected the 'Be made using the login's current security context' radio button under the Security tab, and did not even have to fool with the local/remote user mappings.
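If you would rather script the linked server than use the GUI, the same thing can be done in T-SQL; a minimal sketch, assuming the Amazon instance listens at 10.10.10.10,1433 and you map local logins to a SQL login there (the login and password are placeholders):
-- register the remote instance under the name AMAZON
EXEC master.dbo.sp_addlinkedserver @server = N'AMAZON', @srvproduct = N'', @provider = N'SQLNCLI', @datasrc = N'10.10.10.10,1433';
-- map local logins to a SQL login on the remote server
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'AMAZON', @useself = N'False', @locallogin = NULL, @rmtuser = N'remote_login', @rmtpassword = N'remote_password';
-- after that, the four-part name from the question works
SELECT TOP 10 * FROM [AMAZON].Database1.dbo.Table1;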
In order to be able to run queries across multiple servers, a link (linked server) must be established between the 2 servers. To create a linked server:
Navigate to the Linked Server Sub-folder under the Server Object folders
Right Click on the Linked Server Folder
Click on New Linked Server
Supply the Connection Strings for the Server
Name your Linked Server.
You can now use the full object qualification (LinkedServer.Database.tableOwner.Table) to access the objects.
Good Luck !
You should open your Registered Servers window and create a group for your servers. Then right-click the group name and select New Query (or select several servers in that group). If you execute the query, it will run against the selected servers.