How to seed data into Riak using the REST API from a remote host

I installed Riak as described in the Riak quick start tutorial.
I can upload/seed data to Riak through the REST API using curl as the client. For example:
curl -v -X PUT http://localhost:10018/riak/favs/db \
-H "Content-Type: text/html" \
-d "My new favorite DB is RIAK"
When I GET the same key:
curl -i -X GET http://localhost:10018/riak/favs/db
HTTP/1.1 200 OK
However, when I try uploading/seeding data from another (remote) machine, things do not work as expected:
curl -i -X GET http://10.0.77.81:10018/riak/stats
curl: (7) couldn't connect to host
But I can ping the host:
ping 10.0.77.81
PING 10.0.77.81 (10.0.77.81) 56(84) bytes of data.
64 bytes from 10.0.77.81: icmp_req=1 ttl=61 time=576 ms
I can also connect to a Tomcat server on the same host:
hariharankumar@pc170233-ThinkCentre-M70e:~/softwares/riak-1.4.2/rel/riak$ curl -i -X GET http://10.0.77.81:8080
HTTP/1.1 200 OK
Only when connecting to the Riak port does curl report that it could not connect to the host.

The cluster built in the Riak quick start is intended as a local development cluster, so by default it accepts connections only from 127.0.0.1. You can change this in the app.config file for each node, found in that node's etc directory, and instead bind to e.g. 0.0.0.0.
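For the quick-start dev cluster, the HTTP bind address lives in the riak_core section of each node's app.config. A minimal sketch of the change, assuming the dev1 node layout from the quick start and the port 10018 from the question:

```erlang
%% dev/dev1/etc/app.config (path and port assumed from the quick-start setup)
{riak_core, [
    %% was: {http, [ {"127.0.0.1", 10018} ]}
    %% bind to all interfaces so remote hosts can reach the REST API:
    {http, [ {"0.0.0.0", 10018} ]}
]},
```

After editing, restart the node (e.g. dev/dev1/bin/riak restart) and repeat the change for each node in the cluster.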

Related

How to use port 5434 on main server for postgresql streaming replication

I am trying to set up streaming replication between two PostgreSQL servers. The main server is listening on port 5434 and I have to keep it that way. When I run "pg_basebackup -h (main server ip) -D /var/lib/postgresql/13/main/ -U replicator -P -v -R -X stream -C -S slaveslot1" on the replica server, I get the following error:
"pg_basebackup: error: could not connect to server: Connection refused. Is the server running on host (main server ip) and accepting TCP/IP connections on port 5432?"
Almost all similar questions I found on the web deal with other problems, since their main server is already using port 5432.
So, could you please let me know how I can keep port 5434 on the main server and still run the above command for replication? Thanks in advance!
I was expecting the command to run normally and ask me for a password.
If I change the port to 5432 it works, so the command itself has no mistakes in it.
But I don't know what to do if I keep port 5434.
You can either use the -p option of pg_basebackup, or you can set the PGPORT environment variable, or you can use the -d option with a connection string that contains port=5434.
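Sketches of the three variants, reusing the command from the question (the main server's IP stays a placeholder). Note that without any of these, pg_basebackup falls back to the default port 5432, which is exactly why the error message mentions 5432 even though the server listens on 5434.

```shell
# 1) explicit port via -p
pg_basebackup -h <main server ip> -p 5434 -D /var/lib/postgresql/13/main/ \
  -U replicator -P -v -R -X stream -C -S slaveslot1

# 2) port via the PGPORT environment variable
PGPORT=5434 pg_basebackup -h <main server ip> -D /var/lib/postgresql/13/main/ \
  -U replicator -P -v -R -X stream -C -S slaveslot1

# 3) port inside a connection string passed with -d
pg_basebackup -d "host=<main server ip> port=5434 user=replicator" \
  -D /var/lib/postgresql/13/main/ -P -v -R -X stream -C -S slaveslot1
```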

PostgreSQL: Can't Connect to Socket on Host via Volume in Docker

This is seemingly the same as this issue, though I thought I'd provide a simple example:
docker run -it \
-v /pg_socket_on_host:/pg_socket_in_container \
-e PGPASSWORD=${PGPASSWORD} \
postgres \
psql -h /pg_socket_in_container -U postgres postgres
Where the path /pg_socket_on_host is a directory containing the file .s.PGSQL.5432. I've tried a few different versions of this, but I keep ending up with the same result:
psql: error: connection to server on socket "/pg_socket_in_container/.s.PGSQL.5432" failed: Connection refused
Is the server running locally and accepting connections on that socket?
Is there a reason that this is a problem with Docker?
Follow up:
Based on this post, I ensured that the permissions and the user (name and ID, as well as group and ID) for the host and container path/volume line up, but I still get the same error. I am able to connect to the socket on the host machine from the host machine, and I am also able to reach the host via host.docker.internal from the Docker container. Any other ideas for debugging strategies?
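One debugging step consistent with the example above is to check what the container actually sees at the mount point; if .s.PGSQL.5432 is missing there, the problem is the bind-mount rather than PostgreSQL:

```shell
# same mount as in the question; just list the directory instead of running psql
docker run -it --rm \
  -v /pg_socket_on_host:/pg_socket_in_container \
  postgres \
  ls -la /pg_socket_in_container
```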

Access local postgres database from podman container during build

I have a Spring Boot application that runs inside a container and needs to connect to a local PostgreSQL database.
Right now it fails at build time, because that is when it tries to configure the Spring beans and connect to the database.
I have configured it like following:
spring:
  datasource:
    initialization-mode: always
    platform: postgres
    url: jdbc:postgresql://192.168.122.1:5432/academy
    username: myuser
    password: mypassword
but it fails to connect.
How shall I configure Dockerfile/connection string?
I think you have at least two alternatives.
Alternative 1: Connecting via Unix domain socket
If you could have Postgres listen on a Unix domain socket, you could then pass in that socket to the container with a bind-mount. Use podman run with one of the command-line arguments --volume or --mount.
Maybe something like:
--mount type=bind,src=/path/to/socket/on/host,target=/path/to/socket/in/container
If your system has SELinux enabled, you would need to add the Z option:
--volume /path/to/socket/on/host:/path/to/socket/in/container:Z
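Putting the bind-mount together, a sketch of the full command; the image name and socket paths here are placeholders, not from the question:

```shell
# bind-mount the host's PostgreSQL socket directory into the container
podman run --rm \
  --mount type=bind,src=/var/run/postgresql,target=/var/run/postgresql \
  my-spring-boot-app:latest
```

Note that the standard PostgreSQL JDBC driver speaks TCP; connecting over a Unix domain socket from Java typically needs an additional socket-factory library, so Alternative 2 may be the simpler route for a Spring Boot application.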
Alternative 2: Connecting via TCP socket
I think you could add the option
--network slirp4netns:allow_host_loopback=true
to the podman run command and connect to the IP address 10.0.2.2.
Quote "Allow the slirp4netns to reach the host loopback IP (10.0.2.2, which is added to /etc/hosts as host.containers.internal for your convenience)" from the podman run man page.
See also slirp4netns.1.md.
(The IP address 10.0.2.2 is a default value hard coded in the source code of slirp4netns).
Here is an example of a container that connects to a web server running on localhost:
esjolund@laptop:~$ curl -sS http://localhost:8000/file.txt
hello
esjolund@laptop:~$ cat /etc/issue
Ubuntu 20.04.2 LTS \n \l
esjolund@laptop:~$ podman --version
podman version 3.0.1
esjolund@laptop:~$ podman run --rm docker.io/library/fedora curl -sS http://10.0.2.2:8000/file.txt
curl: (7) Failed to connect to 10.0.2.2 port 8000: Network is unreachable
esjolund@laptop:~$ podman run --rm --network slirp4netns:allow_host_loopback=true docker.io/library/fedora curl -sS http://10.0.2.2:8000/file.txt
hello
esjolund@laptop:~$

How to re-install the Kerberos client for Ambari after the wizard was interrupted?

The story was like this:
I wanted to enable the Kerberos service in Ambari. I configured the KDC configuration on the server node but forgot to sync it to the slave nodes. Then I ran the wizard to enable Kerberos, and it failed after the first step had already installed the Kerberos clients.
The error message showed that the client used admin#12, with the realm defaulted by the Kerberos server install, to contact the Kerberos server, while I had configured the realm as EXAMPLE.COM. After I synced the configuration and re-ran the wizard, it still showed the same error.
I tried every method to redo it, and also checked the operation in a fresh Ambari environment. My guess is that the wrong realm is cached in the Kerberos client, and every re-run of the wizard skips the install-client step because the client is already installed.
So I have come here to ask whether there is a way to re-install the Kerberos client.
The only option is to clean up Kerberos completely and try enabling it again. Please use this set of cURL commands to clean up residual Kerberos configuration from Ambari (follow the sequence):
curl -H "X-Requested-By:ambari" -u admin:admin -i -X delete http://localhost:8080/api/v1/clusters/bahubali/hosts/bali1.openstacklocal/host_components/KERBEROS_CLIENT
curl -H "X-Requested-By:ambari" -u admin:admin -i -X delete http://localhost:8080/api/v1/clusters/bahubali/hosts/bali2.openstacklocal/host_components/KERBEROS_CLIENT
curl -H "X-Requested-By:ambari" -u admin:admin -i -X delete http://localhost:8080/api/v1/clusters/bahubali/hosts/bali3.openstacklocal/host_components/KERBEROS_CLIENT
curl -H "X-Requested-By:ambari" -u admin:admin -i -X GET http://localhost:8080/api/v1/clusters/bahubali/services/KERBEROS/components/KERBEROS_CLIENT
curl -H "X-Requested-By:ambari" -u admin:admin -i -X DELETE http://localhost:8080/api/v1/clusters/bahubali/services/KERBEROS/components/KERBEROS_CLIENT
curl -H "X-Requested-By:ambari" -u admin:admin -i -X GET http://localhost:8080/api/v1/clusters/bahubali/services/KERBEROS
curl -H "X-Requested-By:ambari" -u admin:admin -i -X DELETE http://localhost:8080/api/v1/clusters/bahubali/services/KERBEROS

Unable to Bootstrap node using Chef

I've set up a basic Chef infrastructure that contains a workstation, a hosted Chef Server and an Ubuntu Server to serve as a node. I'm using this setup at my workplace and therefore a proxy is required for internet connections. I've made the necessary proxy settings in both knife.rb and the Ubuntu Server. Both the workstation and the node are properly connected to the internet.
Here's the problem - When I try to bootstrap this node using knife, I get the following error:
<My Node's IP> --2014-02-12 10:29:05-- https://www.opscode.com/chef/install.sh
<My Node's IP> Resolving www.opscode.com (www.opscode.com)... 184.106.28.91
<My Node's IP> Connecting to www.opscode.com (www.opscode.com)|184.106.28.91|:443... failed: Connection refused.
<My Node's IP> bash: line 83: chef-client: command not found
Please note that I used the following command to bootstrap the node -
knife bootstrap <My Node's IP> --sudo -x <username> -P <password> -N <name>
Can you please help me with this?
Thanks in advance.
After struggling with this for a long time, I have finally found the answer.
In knife.rb, an entry for bootstrap_proxy has to be made as well:
knife[:bootstrap_proxy] = "http://username:password@proxy:port"
After doing this, run the following bootstrap command:
knife bootstrap <My Node's IP> --sudo -x <username> -P <password> -N <name>
This worked for me!
I encountered the same problem. You just need to run the same command with some extra options:
knife bootstrap <My Node's IP> --sudo -x <username> -P <password> -N <name> --bootstrap-wget-options --no-check-certificate
It will always work.
In my case, I hadn't added the server to the client's hosts file. For example,
I got this error: "Connection refused - Connection refused connecting to https://server.com/organizations/sample/nodes/node1"
I simply made an entry in the /etc/hosts file with my server's IP and name, i.e. "server.com", and it worked for me.
vi /etc/hosts
192.168.159.100 server.com