OpenSearch 401 for /_bulk - opensearch

I am using fluent bit to stream logs from Kubernetes to OpenSearch (AWS). I have deployed via the Helm charts and have configured the output as below
[OUTPUT]
    Name        opensearch
    Match       *
    Host        aws-domain-name.region.es.amazonaws.com
    Port        443
    Index       k8s-index
    Type        my_type
    tls         on
    tls.verify  off
    HTTP_User   redacted
    HTTP_Passwd redacted
This gives me the following error
[2022/09/27 11:52:19] [error] [output:opensearch:opensearch.0] HTTP status=401 URI=/_bulk
The user has been created in OpenSearch with all_access. This was originally deployed using IAM roles, but I replaced that with an HTTP username and password to simplify the troubleshooting.

OpenSearch uses basic authentication by default, with username 'admin' and password 'admin'. Please try those credentials and see if the request succeeds.

The issue was with the fluent-bit config. The updated, working config is:
[OUTPUT]
    Name       opensearch
    Match      *
    Host       aws-domain-name.region.es.amazonaws.com
    Port       443
    AWS_Region eu-west-1
    Index      k8s-index
    TLS        on
    AWS_Auth   On
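Note that with AWS_Auth On, the IAM role or user signing the requests must also be mapped to an OpenSearch role when fine-grained access control is enabled. A minimal sketch of such a mapping in the security plugin's roles_mapping.yml, with a placeholder role ARN:

```yaml
# roles_mapping.yml (OpenSearch security plugin) -- the ARN below is a placeholder
all_access:
  backend_roles:
    - "arn:aws:iam::123456789012:role/fluent-bit-role"
```

The same mapping can be done from the OpenSearch Dashboards security UI under Roles > all_access > Mapped users.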

Related

cetic-nifi Invalid host header issue

Helm version: v3.5.2
Kubernetes version: v1.20.4
nifi chart version: 1.0.2 (latest release)
Issue: [cetic/nifi]-issue
I'm trying to connect to nifi UI deployed in kubernetes.
I have set the following properties in my values.yaml:
properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  sensitiveKey: changeMechangeMe # must be at least 12 characters
  algorithm: NIFI_PBKDF2_AES_GCM_256
  externalSecure: false
  isNode: false
  httpsPort: 8443
  webProxyHost: 10.0.39.39:30666
  clusterPort: 6007

# ui service
service:
  type: NodePort
  httpsPort: 8443
  nodePort: 30666
  annotations: {}
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24
  ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
  # sessionAffinity: ClientIP
  # sessionAffinityConfig:
  #   clientIP:
  #     timeoutSeconds: 10800
10.0.39.39 is the Kubernetes master node's internal IP.
When NiFi starts, I get the following:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/k8sadmin/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/k8sadmin/.kube/config
NAME: nifi
LAST DEPLOYED: Thu Nov 25 12:38:00 2021
NAMESPACE: jeed-cluster
STATUS: deployed
REVISION: 1
NOTES:
Cluster endpoint IP address will be available at:
kubectl get svc nifi -n jeed-cluster -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
Cluster endpoint domain name is: 10.0.39.39:30666 - please update your DNS or /etc/hosts accordingly!
Once you are done, your NiFi instance will be available at:
https://10.0.39.39:30666/nifi
and when I do a curl:
curl https://10.0.39.39:30666 put sample.txt -k
<h1>System Error</h1>
<h2>The request contained an invalid host header [<code>10.0.39.39:30666</
the request [<code>/</code>]. Check for request manipulation or third-part
t.</h2>
<h3>Valid host headers are [<code>empty
<ul><li>127.0.0.1</li>
<li>127.0.0.1:8443</li>
<li>localhost</li>
<li>localhost:8443</li>
<li>[::1]</li>
<li>[::1]:8443</li>
<li>nifi-0.nifi-headless.jeed-cluste
<li>nifi-0.nifi-headless.jeed-cluste
<li>10.42.0.8</li>
<li>10.42.0.8:8443</li>
<li>0.0.0.0</li>
<li>0.0.0.0:8443</li>
</ul>
I've tried a lot of things but still cannot whitelist the master node IP in the proxy hosts. Ingress is not used.
Edit: it looks like the properties set in values.yaml are not applied to nifi.properties inside the pod. Is there any reason for this?
Appreciate any help!
As a NodePort service, you can assign a port number from the 30000-32767 range.
You can apply values when you install your chart with:
properties:
  webProxyHost: localhost
  httpsPort:
This should let nifi whitelist your https://localhost:
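For reference, a values.yaml fragment along those lines might look like this; the port values here are placeholders to substitute with your own:

```yaml
properties:
  webProxyHost: localhost   # host that NiFi will accept in the Host header
  httpsPort: 8443

service:
  type: NodePort
  nodePort: 30666           # must be in the 30000-32767 NodePort range
```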

Kubernetes(Istio) Mongodb enterprise cluster: HostUnreachable: Connection reset by peer

I have Istio 1.6 running in my k8s cluster. In the cluster I have also deployed a sharded mongodb cluster with istio-injection disabled.
And I have a different namespace for my app with istio-injection enabled. From a pod there, if I try to connect to mongo I get this "connection reset by peer" error:
root@mongo:/# mongo "mongodb://mongo-sharded-cluster-mongos-0.mongo-service.mongodb.svc.cluster.local:27017,mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017/?ssl=false"
MongoDB shell version v4.2.8
connecting to: mongodb://mongo-sharded-cluster-mongos-0.mongo-service.mongodb.svc.cluster.local:27017,mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017/?compressors=disabled&gssapiServiceName=mongodb&ssl=false
2020-06-18T19:59:14.342+0000 I NETWORK [js] DBClientConnection failed to receive message from mongo-sharded-cluster-mongos-0.mongo-service.mongodb.svc.cluster.local:27017 - HostUnreachable: Connection reset by peer
2020-06-18T19:59:14.358+0000 I NETWORK [js] DBClientConnection failed to receive message from mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017 - HostUnreachable: Connection reset by peer
2020-06-18T19:59:14.358+0000 E QUERY [js] Error: network error while attempting to run command 'isMaster' on host 'mongo-sharded-cluster-mongos-1.mongo-service.mongodb.svc.cluster.local:27017' :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2020-06-18T19:59:14.362+0000 F - [main] exception: connect failed
2020-06-18T19:59:14.362+0000 E - [main] exiting with code 1
But if I disable the istio-injection to my app(pod) then I can successfully connect and use mongo as expected.
Is there a work around for this, I would like to have istio-proxy injected to my app/pod and use mongodb?
Injecting databases with Istio is complicated.
I would start by checking your mTLS: if it's STRICT, change it to PERMISSIVE and check whether that works. It's well described here.
You see requests still succeed, except for those from the client that doesn’t have proxy, sleep.legacy, to the server with a proxy, httpbin.foo or httpbin.bar. This is expected because mutual TLS is now strictly required, but the workload without sidecar cannot comply.
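Switching mTLS to permissive mode can be sketched with a PeerAuthentication object; the namespace name below is an assumption to adjust for your cluster:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: mongodb   # assumed: the namespace running the mongodb cluster
spec:
  mtls:
    mode: PERMISSIVE   # accept both plaintext and mTLS traffic
```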
Is there a work around for this, I would like to have istio-proxy injected to my app/pod and use mongodb?
If changing mTLS doesn't work, then in Istio you can set up the database without injection and add it to Istio's service registry using a ServiceEntry object, so that it can communicate with the rest of the Istio services.
To add your mongodb database to Istio you can use a ServiceEntry.
ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes). In addition, the endpoints of a service entry can also be dynamically selected by using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.
Example of ServiceEntry
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc-mongocluster
spec:
  hosts:
  - mymongodb.somedomain # not used
  addresses:
  - 192.192.192.192/24 # VIPs
  ports:
  - number: 27018
    name: mongodb
    protocol: MONGO
  location: MESH_INTERNAL
  resolution: STATIC
  endpoints:
  - address: 2.2.2.2
  - address: 3.3.3.3
If you have mTLS enabled, you will also need a DestinationRule that defines how to communicate with the external service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: mtls-mongocluster
spec:
  host: mymongodb.somedomain
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem
Additionally, take a look at this documentation:
https://istiobyexample.dev/databases/
https://istio.io/latest/blog/2018/egress-mongo/

How do I set the minio domain for pre-signed URLs?

I'm using minio in Kubernetes and it works great. However, I can't seem to change the domain and protocol for a pre-signed URL. Minio keeps giving me http://minio.test.svc:9000/delivery/ whereas I want https://example.com/delivery. I've tried setting MINIO_DOMAIN in the pod but it seems to have no effect; I think I'm misusing this variable anyway.
It all depends on how you create your Minio client instance. Specifying host and port as below will make Minio resolve your domain to IP address and use IP rather than the domain. Sample JavaScript code:
import { Client as MinioClient } from 'minio';

const client = new MinioClient({
  endPoint: 'yourdomain.com',
  port: 9000,
  accessKey: process.env.MINIO_ACCESS_KEY,
  secretKey: process.env.MINIO_SECRET_KEY,
  useSSL: false,
});
If you create your minio instance like above, your domain will be resolved to its corresponding IP address, and thus minio will work with http://x.x.x.x:9000 as opposed to https://yourdomain.com.
Also note that if your client is configured as above, trying to use useSSL: true will throw an SSL error like the one below:
write EPROTO 140331355002752:error:1408F10B:SSL routines:ssl3_get_record:wrong
version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332
For minio to use your domain as https://yourdomain.com, you need to have a web server like nginx to proxy your requests to your minio server. Minio has documented how you can achieve this here. Add SSL to your domain as documented here then proceed to create your minio client as below:
import { Client as MinioClient } from 'minio';

const client = new MinioClient({
  endPoint: 'yourdomain.com',
  port: 443,
  accessKey: process.env.MINIO_ACCESS_KEY,
  secretKey: process.env.MINIO_SECRET_KEY,
  useSSL: true,
});
Note the change in port and useSSL parameters.
Minio will now use https://yourdomain.com in all cases. Signed urls will also be https.
I bashed my head on this problem for a couple of days and managed to resolve it with NGINX in my Kubernetes cluster.
NGINX controller Kubernetes: need to change Host header within ingress
You use the ingress annotations to change the Host header of all incoming traffic to your Minio ingress so that it is always the same hostname.
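With the ingress-nginx controller, that Host rewrite can be sketched with the upstream-vhost annotation; the hostnames and service name below are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio
  annotations:
    # force the Host header seen by the minio backend to a fixed value (assumed hostname)
    nginx.ingress.kubernetes.io/upstream-vhost: "example.com"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: minio   # assumed service name
            port:
              number: 9000
```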

Kubernetes Cloud SQL sidecar connection timed out. How to check if credentials work?

I'm trying to setup a Cloud SQL Proxy Docker image for PostgreSQL as mentioned here.
I can get my app to connect to the proxy docker image but the proxy times out. I suspect it's my credentials or the port, so how do I debug and find out if it works?
This is what I have on my project
kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=my-account-credentials.json
My deploy spec snippet:
spec:
  containers:
  - name: mara ...
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.11
    command: ["/cloud_sql_proxy",
              "-instances=<MY INSTANCE NAME>=tcp:5432",
              "-credential_file=/secrets/cloudsql/credentials.json"]
    volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
  volumes:
  - name: cloudsql-instance-credentials
    secret:
      secretName: cloudsql-instance-credentials
The logs of my cloudsql-proxy show a timeout:
2019/05/13 15:08:25 using credential file for authentication; email=646092572393-compute@developer.gserviceaccount.com
2019/05/13 15:08:25 Listening on 127.0.0.1:5432 for <MY INSTANCE NAME>
2019/05/13 15:08:25 Ready for new connections
2019/05/13 15:10:48 New connection for "<MY INSTANCE NAME>"
2019/05/13 15:10:58 couldn't connect to <MY INSTANCE NAME>: dial tcp <MY PRIVATE IP>:3307: getsockopt: connection timed out
Questions:
I specify 5432 as my port, but as you can see in the logs above, it's hitting 3307. Is that normal, and if not, how do I specify 5432?
How do I check if it is a problem with my credentials? My credentials file is from my service account 123-compute@developer.gserviceaccount.com,
and the service account shown when I go to my Cloud SQL console is p123-<somenumber>@gcp-sa-cloud-sql.iam.gserviceaccount.com. They don't seem the same? Does that make a difference?
If I make the Cloud SQL instance available on a public IP, it works.
I specify 5432 as my port, but as you can see in the logs above, it's hitting 3307
The proxy listens locally on the port you specified (in this case 5432), and connects to your Cloud SQL instance via port 3307. This is expected and normal.
How do I check if it is a problem with my credentials?
The proxy returns an authorization error if the Cloud SQL instance doesn't exist, or if the service account doesn't have access. The connection timeout error means it failed to reach the Cloud SQL instance.
My credentials file is from my service account 123-compute@developer.gserviceaccount.com and the service account shown when I go to my cloud sql console is p123-<somenumber>@gcp-sa-cloud-sql.iam.gserviceaccount.com. They don't seem the same?
One is just the name of the file, the other is the name of the service account itself. The name of the file doesn't have to match the name of the service account. You can check the name and IAM roles of a service account on the Service Account page.
2019/05/13 15:10:58 couldn't connect to : dial tcp :3307: getsockopt: connection timed out
This error means that the proxy failed to establish a network connection to the instance (usually because a path from the current location doesn't exist). There are two common causes for this:
First, make sure there isn't a firewall or something blocking outbound connections on port 3307.
Second, since you are using Private IP, you need to make sure the resource you are running the proxy on meets the networking requirements.
The proxy connects out on port 3307. This is mentioned in the documentation:
Port 3307 is used by the Cloud SQL Auth proxy to connect to the Cloud SQL Auth proxy server. -- https://cloud.google.com/sql/docs/postgres/connect-admin-proxy#troubleshooting
You may need to create a firewall rule like the following:
Direction: Egress
Action on match: Allow
Destination filters : IP ranges 0.0.0.0/0
Protocols and ports : tcp:3307 & tcp:5432
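The rule above could be created with a gcloud command along these lines; the rule name is a placeholder:

```shell
# allow outbound connections to the Cloud SQL proxy server port and the local listener port
gcloud compute firewall-rules create allow-cloudsql-egress \
  --direction=EGRESS \
  --action=ALLOW \
  --rules=tcp:3307,tcp:5432 \
  --destination-ranges=0.0.0.0/0
```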

NGINX Ingress dealing hostAliases fail due to SSL error

I've configured an NGINX Ingress to use SSL. This works fine, but I'm trying to use hostAliases to route all requests from one domain back to my cluster, and it's failing with the following error:
Error: unable to verify the first certificate
My alias:
hostAliases:
- ip: "MY.CLUSTER.IP"
  hostnames:
  - "my.domain.com"
Is there a way to use this aliasing and still get ssl working?
According to this, a host alias is just a record in the /etc/hosts file, which overrides the usual DNS resolution. If I understand your issue correctly, you just need a valid TLS certificate for "my.domain.com" installed on MY.CLUSTER.IP.
Hope it helps.