I am trying to configure the database secrets engine in Vault for dynamic credential generation. Even though I have provided a valid custom port for SQL Server, Vault appears to ignore the custom port given in the command and pick up the default port instead.
Please refer to the capture
Could somebody help me configure the Vault database secrets engine to use a custom port?
Text version of the attached image:
C:\WINDOWS\system32>vault write database/config/my-mssql-database
plugin_name=mssql-database-plugin
connection_url='sqlserver://{{username}}:{{password}}#localhost\sql2017:64062'
allowed_roles="my-role" username="vaultuser" password="******"
Error writing data to database/config/my-mssql-database: Error making
API request.
URL: PUT http://127.0.0.1:8200/v1/database/config/my-mssql-database
Code: 400. Errors:
error creating database object: error verifying connection: Unable to
open tcp connection with host 'localhost:1433': dial tcp
127.0.0.1:1433: connectex: No connection could be made because the target machine actively refused it.
I'm not sure why you're using a backslash in your database URL, but you are putting the port in the wrong place - it needs to come immediately after the domain portion of the URL (and before the path). Instead of
connection_url='sqlserver://{{username}}:{{password}}#localhost\sql2017:64062'
try
connection_url='sqlserver://{{username}}:{{password}}#localhost:64062/sql2017'
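For reference, the full corrected command would look something like the sketch below, entered as a single command as in the capture above. This is only a sketch: it assumes the named instance really is listening on port 64062, and it uses '@' to separate the credentials from the host, which is the form shown in the Vault documentation for the mssql plugin:
vault write database/config/my-mssql-database
    plugin_name=mssql-database-plugin
    connection_url='sqlserver://{{username}}:{{password}}@localhost:64062/sql2017'
    allowed_roles="my-role" username="vaultuser" password="******"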
SUMMARY
I have installed Zabbix on an OpenShift cluster. I am trying to monitor a host (VM) outside the cluster, but the Zabbix server is unable to connect to it. In the /etc/zabbix/zabbix_agentd.conf file I have set the DNS name of the server, zabbix-server, but it looks like the server is trying to connect from a different public IP. I am not sure what this IP is.
OS / ENVIRONMENT / Used docker-compose files
I applied the kubernetes.yaml file present in this repo - https://github.com/zabbix/zabbix-docker/blob/6.2/kubernetes.yaml - on an OpenShift cluster.
CONFIGURATION
In the /etc/zabbix/zabbix_agentd.conf file Server=zabbix-server.
STEPS TO REPRODUCE
Apply the kubernetes.yaml file on an OpenShift cluster and try to monitor any external VM.
EXPECTED RESULTS
The Zabbix server should be able to connect to the VM.
ACTUAL RESULTS
Zabbix server logs.
Defaulted container "zabbix-server" out of: zabbix-server, zabbix-snmptraps
** Updating '/etc/zabbix/zabbix_server.conf' parameter "DBHost": 'mysql-server'...added
287:20230120:060843.131 Zabbix agent item "system.cpu.load[all,avg5]" on host "Host-C" failed: first network error, wait for 15 seconds
289:20230120:060858.592 Zabbix agent item "system.cpu.num" on host "Host-C" failed: another network error, wait for 15 seconds
289:20230120:060913.843 Zabbix agent item "system.sw.arch" on host "Host-C" failed: another network error, wait for 15 seconds
289:20230120:060929.095 temporarily disabling Zabbix agent checks on host "Host-C": interface unavailable
Logs from the agent installed on the vm.
350446:20230122:103232.230 failed to accept an incoming connection: connection from "9.x.x.219" rejected, allowed hosts: "zabbix-server"
350444:20230122:103332.525 failed to accept an incoming connection: connection from "9.x.x.219" rejected, allowed hosts: "zabbix-server"
350445:20230122:103432.819 failed to accept an incoming connection: connection from "9.x.x.210" rejected, allowed hosts: "zabbix-server"
350446:20230122:103533.114 failed to accept an incoming connection: connection from "9.x.x.217" rejected, allowed hosts: "zabbix-server"
If I add this IP in /etc/zabbix/zabbix_agentd.conf it works. But what IP is this? Is it a service, or a node/pod IP? It keeps changing, and I cannot keep updating this IP in the conf file every time. I need something more stable.
Kindly help me out with this issue.
I don't know Zabbix, so I have to make some educated guesses about how both the agent and the server work.
But, to summarize: unlike something like Docker Compose, where you run the Zabbix server on a known host, in OpenShift/Kubernetes you are deploying into a cluster of machines with their own networking. In other words, the whole point of OpenShift is that OpenShift controls where the application's pod gets deployed and relocates/restarts that pod as needed, with a different IP every time. (And the DNS name is meaningless, since the two systems aren't sharing DNS anyway.) Most likely the IPs you are seeing are the pods' randomly assigned IPs.
So, what are you to do in a situation like yours, where an external application requires a predictable IP? Option 1 is to remove that requirement: using something like a certificate is more secure and more reliable than depending on an IP anyway. Another option is to use an egress IP. This is a feature of OpenShift where you essentially use a proxy to present a consistent IP to external applications.
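On the agent side, a stopgap is that the Server directive in zabbix_agentd.conf accepts a comma-separated list of IP addresses, CIDR ranges, or DNS names, so you could allow the whole range the cluster's traffic comes from, or the egress IP once you have configured one. The addresses below are placeholders, not values from your environment:
# /etc/zabbix/zabbix_agentd.conf
# allow the source range the cluster connects from (placeholder range)
Server=9.0.0.0/8
# or, once an egress IP is configured, a single stable address (placeholder)
# Server=203.0.113.25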
Our DevOps team has whitelisted my home IP address so that I can connect to our Postgres database on Azure, and thanks to this I am able to connect to the Azure database.
Today I set up a VM in order to run Docker. I am running a container for RStudio which is an app that, among many other things, allows me to connect to our database using ODBC.
After configuring the odbcinst.ini and odbc.ini files, I believe they are configured correctly, because when I try to connect I get the following error:
Error: nanodbc/nanodbc.cpp:983: 00000: FATAL: SSL connection is required. Please specify SSL options and retry.
So I think my ODBC setup is correct: this error suggests my connection settings are fine, it's just that Azure will not allow the connection without SSL.
Searching that error message took me to this SO post with the following accepted answer:
By default, Azure Database for PostgreSQL enforces SSL connections between your server and your client applications to protect against MITM (man in the middle) attacks. This is done to make the connection to your server as secure as possible.
Although not recommended, you have the option to disable requiring SSL for connecting to your server if your client application does not support SSL connectivity. Please check How to Configure SSL Connectivity for your Postgres server in Azure for more details. You can disable requiring SSL connections from either the portal or using CLI. Note that Azure does not recommend disabling requiring SSL connections when connecting to your server.
My question is: if I am already able to connect to our database outside of my VM, thanks to my home IP being whitelisted and just using a Postgres driver with the DBeaver SQL client, is there anything I can do to connect from within my VM?
I can get my VM's IP address, but I am not sure whether sending that to our developers to whitelist would work.
Is there a prescribed course of action here?
I added this parameter to my .odbc.ini file and was able to connect:
sslmode=require
From the Azure Postgres documentation, this parameter may take different forms depending on the context:
"for example "ssl=true" or "sslmode=require" or "sslmode=required" and other variations"
I'm trying to connect to Google Cloud SQL from my machine (Ubuntu) using this command:
mysql --host='Public IP' --user='' --password
However, I'm getting this error:
ERROR 2003 (HY000): Can't connect to MySQL server on 'Public IP' (110)
I would appreciate any help resolving this issue.
First, you need to let the Cloud SQL instance know which IP addresses it can accept connections from. You can do that without SSL by following the instructions here. However, to be more secure, I would recommend using SSL. More info on that here.
Probably the easiest way to securely connect from your local machine to the public IP of a Cloud SQL instance is to download and use the proxy, following the instructions here:
https://cloud.google.com/sql/docs/mysql/connect-admin-proxy
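Roughly, the steps on that page boil down to something like this sketch (the instance connection name and user are placeholders, and it uses the older v1 cloud_sql_proxy flags):
# in one terminal: start the proxy, forwarding a local port to the instance
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306
# in another terminal: connect through the local tunnel instead of the public IP
mysql --host=127.0.0.1 --port=3306 --user=myuser --password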
What you have to do is add a network in the Public IP section, under the Connections tab, after selecting your Cloud SQL instance.
See Cloud SQL Connections Tab here
So, for the Name input you put a firstname-lastname kind of value to note whose IP it is. Then enter your IP address, e.g. 1.2.3.4/32, into the Network input.
After doing so and saving, you will be able to connect.
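If you prefer the CLI to the console, the same authorized-network entry can be added with gcloud; the instance name and address below are placeholders:
# note: this replaces the instance's existing list of authorized networks
gcloud sql instances patch my-instance --authorized-networks=1.2.3.4/32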
Yes, you can add SSL and use certificates. That is all best practice and what should be done for a production stack. But if this is just getting off the ground and in rapid development, that's all you need to do in the beginning.
Hopefully I'm doing something wrong. I've read all the documentation and scoured forums but can't seem to get to the bottom of an issue I'm experiencing. I'm using OSX, by the way.
Things that are working:
Connect to cloud SQL from local OS using proxy via either TCP or Socket
Connect to cloud SQL from local OS using proxy in container via TCP
Connect to cloud SQL from GKE using proxy in the same pod via TCP
Things that are not working:
Connect to cloud SQL from local OS using proxy in container via socket
Connect to cloud SQL from GKE using proxy in the same pod via socket
I suspect both of these problems are actually the same problem. I'm using this command to run the proxy inside of the container:
docker run -v [PATH]:/cloudsql \
gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy -dir=/cloudsql \
-instances=[INSTANCE_CONNECTION_NAME] -credential_file=/cloudsql/[FILE].json
And the associated socket is being generated in that directory. However, when I attempt to connect I get the following error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/cloudsql/node-sql:us-central1:nodedb' (61)
The proxy doesn't log a new line when I try to connect, which makes me think it's not receiving the request; it simply says Ready for new connections and waits.
Any idea what's going wrong, or how I could troubleshoot this further?
For "Connect to cloud SQL from GKE using proxy in the same pod via socket" can you please follow the tutorial at https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine? We have a working WordPress example there that has the cloudsql-proxy as a sidecar container (i.e. in the same Pod, but over TCP).
I don't think you can do "in the same pod via socket" unless you’re running multiple processes in a single container (which you shouldn’t as a best practice). If you do a sidecar container, you can use TCP, so you don’t need a unix socket (moreover, I'm not sure how you’d share files between containers of a Pod).
Also, the docker run -v /local.sock:/remote.sock (I think) will be creating a file/directory locally as /local.sock and making that available inside the container as /remote.sock. This might not work because the docker-engine doesn't know that /local.sock is meant to be a Unix socket and it creates a regular file.
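For reference, the sidecar-over-TCP arrangement from that tutorial looks roughly like the sketch below; the application image, instance connection name, and secret name are placeholders rather than values from the question:
apiVersion: v1
kind: Pod
metadata:
  name: app-with-cloudsql
spec:
  containers:
  - name: app
    image: my-app-image              # placeholder application image
    env:
    - name: DB_HOST                  # the app talks to the proxy over localhost TCP
      value: "127.0.0.1"
    - name: DB_PORT
      value: "3306"
  - name: cloudsql-proxy             # sidecar in the same Pod
    image: gcr.io/cloudsql-docker/gce-proxy
    command: ["/cloud_sql_proxy",
              "-instances=my-project:us-central1:my-instance=tcp:3306",
              "-credential_file=/secrets/cloudsql/credentials.json"]
    volumeMounts:
    - name: cloudsql-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
  volumes:
  - name: cloudsql-credentials
    secret:
      secretName: cloudsql-credentials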
I need to deploy an Azure Worker Role with an input endpoint on port 21 so that it accepts incoming FTP connections, and so that I can connect to the worker role through an FTP client like FileZilla and access Azure Blob storage.
For this I was able to implement FTP commands like LIST, RETR, STOR, PORT, USER and PASS. All of these work fine in Active mode.
But when I switch to Passive mode (i.e. the client sends the PASV command to the Azure Worker Role), I run into a problem. Since I am a newbie to Azure I am not able to trace it. Going through a few blogs, I learned that because Azure roles sit behind the load balancer, Passive mode needs extra configuration. I saw a few blogs that talk about manually configuring a Web Role for FTP. Since I am working on a Worker Role, does the configuration change, and how can I handle it in code? Moreover, since we are not sure which VM the role will be deployed to, how can I handle the configuration?
Ways I tried:
1. In the Azure Worker Role, I set the following endpoints (see the ServiceDefinition.csdef sketch after this list):
FTP        Input    tcp    21
Endpoint1  Input    tcp    1025
Initially, in OnStart(), I had this line of code:
TcpListener server = SocketHelpers.CreateTcpListener(RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["FTP"].IPEndpoint);
and for PASV mode I had the following:
TcpListener server = SocketHelpers.CreateTcpListener(RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint);
so that it opens the new port 1025 and sends it back to the client. While sending back to the client I got the following exception:
SocketErrorCode is 10053 and SocketErrorDesc: System.Net.Sockets.SocketError.ConnectionAborted
Unable to write data to the transport connection: An established connection was aborted by the software in your host machine.
2. The other way is getting the external IP address using http://checkip.dyndns.org/. If I get the IP address from this, do I need to get the port from code using
RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint?
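(For clarity, the two endpoints from point 1 would be declared in ServiceDefinition.csdef roughly as in the sketch below; the service and role names are placeholders:)
<ServiceDefinition name="MyFtpService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="FtpWorkerRole" vmsize="Small">
    <Endpoints>
      <!-- control channel -->
      <InputEndpoint name="FTP" protocol="tcp" port="21" />
      <!-- data channel used after PASV -->
      <InputEndpoint name="Endpoint1" protocol="tcp" port="1025" />
    </Endpoints>
  </WorkerRole>
</ServiceDefinition>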
I am really confused by the Azure side of this and the FTP configuration.
I went through the following articles but could not find how to configure a worker role programmatically (setting the port range, retrieving it from code) to work in Passive mode.
http://www.itq.nl/blogs/post/Walkthrough-Hosting-FTP-on-IIS-75-in-Windows-Azure-VM.aspx
http://angelolaris.blogspot.com/
Regards,
Vivek
The first thing I would like to confirm: I'm not sure whether you are also starting the listener as below or not:
TcpListener myPortListener = new TcpListener(RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["MY_PORT"].IPEndpoint);
myPortListener.Start();
Next, when you have the above code in your worker role, the port starts to accept incoming requests, and any application bound to that IP/port will receive the packets.
If you really want to understand how to get this working in your Worker Role, what you can do is follow this guidance to set it up in a Web Role first and then try to replicate the same configuration in your Worker Role. It is a little complex, but first you need to understand how things work; then you will be able to implement it yourself.
Also, your requirement is not clear to me, because I am not sure why you need such a configuration: you can connect directly to Azure Blob storage (if your data is located there) from a Worker Role and access the content, so why add FTP/local connectivity and make things complex? Maybe if you revisit your application architecture, you won't need to do this at all.