Network not found error while changing Nodeport range in openshift origin using OC - deployment

I am trying to change the NodePort range in OpenShift Origin with the oc command below:
oc patch network.config.openshift.io cluster --type=merge -p '{ "spec": { "serviceNodePortRange": "30000-" } }'
I get an error like network.config.openshift.io not found. Am I missing any prerequisites?
Please help me resolve this.
Thanks in advance.

I think you missed the upper bound of the port range.
Try to run this:
oc patch network.config.openshift.io cluster \
--type=merge \
-p '{ "spec": { "serviceNodePortRange": "30000-33333" } }'
You should check the Configuring the node port service range page.
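If you want to sanity-check the range value locally before patching, here is a quick sketch (the function name and regex are illustrative, not part of oc — they just check for the "low-high" form the docs use):

```shell
#!/usr/bin/env bash
# Quick local check that a serviceNodePortRange value has both bounds,
# e.g. "30000-33333" rather than "30000-". (Function name and regex are
# illustrative, not part of oc.)
valid_port_range() {
  [[ "$1" =~ ^[0-9]+-[0-9]+$ ]]
}

valid_port_range "30000-"      || echo "missing upper bound"
valid_port_range "30000-33333" && echo "range looks complete"
```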

mongoDB whitelist IP

I am seeing similar posts, however none are helping me solve my problem.
Following a Udemy tutorial that builds a MERN application from scratch, I got stuck on the mongoose connection.
Here is my index.js code:
const express = require("express");
const mongoose = require("mongoose");

const app = express();
app.use(express.json());

app.listen(5000, () => console.log("Server started on port 5000"));

app.use("/snippet", require("./routers/snippetRouter"));

mongoose.connect(
  "mongodb+srv://snippetUser:_password_@snippet-manager.sometext.mongodb.net/main?retryWrites=true&w=majority",
  { useNewUrlParser: true, useUnifiedTopology: true },
  (err) => {
    if (err) return console.log("error here " + err);
    console.log("Connected to MongoDB");
  }
);
Here is the error I am getting:
Server started on port 5000
error here MongooseServerSelectionError: Could not connect to any
servers in your MongoDB Atlas cluster. One common reason is
that you're trying to access the database from an IP that isn't
whitelisted. Make sure your current IP address is on your Atlas
cluster's IP whitelist:
https://docs.atlas.mongodb.com/security-whitelist/
As stated, I have seen similar errors relating to an IP that isn't whitelisted.
However, in my MongoDB account, it seems that my IP is already whitelisted:
In the screenshot above, the blank part is where my IP is located (right before it says "includes your current IP address").
Since my IP is listed there, does that not mean my IP is whitelisted?
If not, how do I whitelist my IP?
After a couple of days of frustration, I went into Mongo Atlas, then into Network Access and changed the setting to "allow access from anywhere". It removed my IP address and changed it to a universal IP address.
This was a deviation from the tutorial I am following on Udemy, but it did work, and I can finally proceed with the rest of the course.
Here is an answer I left elsewhere; hope it helps someone who comes across this.
This script will be kept up to date on my gist.
why
Mongo Atlas provides reasonably priced access to a managed Mongo DB. The CSPs where containers are hosted charge too much for their managed Mongo DBs, and they all suggest setting an insecure CIDR (0.0.0.0/0) to allow the container to access the cluster, which is obviously ridiculous.
This entrypoint script is surgical, to maintain least-privileged access: only the current hosted IP address of the service is whitelisted.
usage
set as the entrypoint for the Dockerfile
run in cloud-init / VM startup if not using a container (and delete the last line, exec "$@", since that is just for containers)
behavior
uses the Mongo Atlas project IP access list endpoints
detects the hosted IP address of the container and whitelists it with the cluster using the Mongo Atlas API
if the service has no whitelist entry, one is created
if the service has an existing whitelist entry that matches the current IP, nothing changes
if the service IP has changed, the old entry is deleted and a new one is created
when a whitelist entry is created, the service sleeps for 60s to wait for Atlas to propagate access to the cluster
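The create / no-change / recreate behavior above can be sketched as a pure decision function (the function name is hypothetical; the full script later in this answer interleaves this logic with the API calls):

```shell
#!/usr/bin/env bash
# Sketch of the whitelist decision logic described above (function name
# hypothetical). Given the previously whitelisted IP and the current
# hosted IP, decide which action the entrypoint should take.
decide_whitelist_action() {
  local previous_ip="$1" current_ip="$2"
  if [[ -z "$previous_ip" ]]; then
    echo "create"      # no whitelist entry yet
  elif [[ "$previous_ip" == "$current_ip" ]]; then
    echo "no-op"       # entry already matches the current IP
  else
    echo "recreate"    # IP changed: delete old entry, add new one
  fi
}

decide_whitelist_action "" "203.0.113.7"              # create
decide_whitelist_action "203.0.113.7" "203.0.113.7"   # no-op
decide_whitelist_action "198.51.100.1" "203.0.113.7"  # recreate
```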
env
setup
1. create an API key for the org
2. add the API key to the project
3. copy the public key (MONGO_ATLAS_API_PK) and secret key (MONGO_ATLAS_API_SK)
4. go to the project settings page and copy the project ID (MONGO_ATLAS_API_PROJECT_ID)
provide the following values in the env of the container service:
SERVICE_NAME: unique name used for creating / updating (deleting old) whitelist entry
MONGO_ATLAS_API_PK: from step 3
MONGO_ATLAS_API_SK: from step 3
MONGO_ATLAS_API_PROJECT_ID: from step 4
deps
bash
curl
jq CLI JSON parser
# alpine / apk
apk update \
&& apk add --no-cache \
bash \
curl \
jq
# ubuntu / apt
export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get -y install \
bash \
curl \
jq
script
#!/usr/bin/env bash

# -- ENV -- #
# these must be available to the container service at runtime
#
# SERVICE_NAME
#
# MONGO_ATLAS_API_PK
# MONGO_ATLAS_API_SK
# MONGO_ATLAS_API_PROJECT_ID
#
# -- ENV -- #

set -e

mongo_api_base_url='https://cloud.mongodb.com/api/atlas/v1.0'

check_for_deps() {
  deps=(
    bash
    curl
    jq
  )
  for dep in "${deps[@]}"; do
    if [ ! "$(command -v "$dep")" ]
    then
      echo "dependency [$dep] not found. exiting"
      exit 1
    fi
  done
}

make_mongo_api_request() {
  local request_method="$1"
  local request_url="$2"
  local data="$3"

  curl -s \
    --user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
    --header "Accept: application/json" \
    --header "Content-Type: application/json" \
    --request "$request_method" "$request_url" \
    --data "$data"
}

get_access_list_endpoint() {
  echo -n "$mongo_api_base_url/groups/$MONGO_ATLAS_API_PROJECT_ID/accessList"
}

get_service_ip() {
  echo -n "$(curl https://ipinfo.io/ip -s)"
}

get_previous_service_ip() {
  local access_list_endpoint=`get_access_list_endpoint`

  local previous_ip=`make_mongo_api_request 'GET' "$access_list_endpoint" \
    | jq --arg SERVICE_NAME "$SERVICE_NAME" -r \
      '.results[]? as $results | $results.comment | if test("\\[\($SERVICE_NAME)\\]") then $results.ipAddress else empty end'`

  echo "$previous_ip"
}

whitelist_service_ip() {
  local current_service_ip="$1"
  local comment="Hosted IP of [$SERVICE_NAME] [set@$(date +%s)]"

  if (( "${#comment}" > 80 )); then
    echo "comment field value will be above 80 char limit: \"$comment\""
    echo "comment would be too long due to length of service name [$SERVICE_NAME] [${#SERVICE_NAME}]"
    echo "change comment format or service name then retry. exiting to avoid mongo API failure"
    exit 1
  fi

  echo "whitelisting service IP [$current_service_ip] with comment value: \"$comment\""

  response=`make_mongo_api_request \
    'POST' \
    "$(get_access_list_endpoint)?pretty=true" \
    "[
      {
        \"comment\" : \"$comment\",
        \"ipAddress\": \"$current_service_ip\"
      }
    ]" \
    | jq -r 'if .error then . else empty end'`

  if [[ -n "$response" ]];
  then
    echo 'API error whitelisting service'
    echo "$response"
    exit 1
  else
    echo "whitelist request successful"
    echo "waiting 60s for whitelist to propagate to cluster"
    sleep 60s
  fi
}

delete_previous_service_ip() {
  local previous_service_ip="$1"

  echo "deleting previous service IP address of [$SERVICE_NAME]"

  make_mongo_api_request \
    'DELETE' \
    "$(get_access_list_endpoint)/$previous_service_ip"
}

set_mongo_whitelist_for_service_ip() {
  local current_service_ip=`get_service_ip`
  local previous_service_ip=`get_previous_service_ip`

  if [[ -z "$previous_service_ip" ]]; then
    echo "service [$SERVICE_NAME] has not yet been whitelisted"
    whitelist_service_ip "$current_service_ip"
  elif [[ "$current_service_ip" == "$previous_service_ip" ]]; then
    echo "service [$SERVICE_NAME] IP has not changed"
  else
    echo "service [$SERVICE_NAME] IP has changed from [$previous_service_ip] to [$current_service_ip]"
    delete_previous_service_ip "$previous_service_ip"
    whitelist_service_ip "$current_service_ip"
  fi
}

check_for_deps
set_mongo_whitelist_for_service_ip

# run CMD
exec "$@"
If you're using the free MongoDB Atlas tier, make sure your cluster hasn't accidentally been put on pause.
Remove your current IP address and add it again:
Go to your MongoDB Atlas account.
After logging in, go to the URL below:
https://cloud.mongodb.com/v2/your_cluster_id#/security/network/accessList
Then add the IP in the IP Access List tab: click + ADD IP ADDRESS.
Now you can access the DB from that particular IP.
==================== OR =============================
Go to Network Access.
Then add the IP in the IP Access List tab: click + ADD IP ADDRESS.
Now you can access the DB from that particular IP.
You should enter your cluster password in the connection link:
"mongodb+srv://snippetUser:password@snippet-manager.sometext.mongodb.net/main?retryWrites=true&w=majority"
Enter your cluster password in place of the password field.

Validate Cluster - api/v1/nodes: http: server gave HTTP response to HTTPS client

On my Ubuntu 18.04 AWS server, I try to create a cluster via kops:
kops create cluster \
--name=asdf.com \
--state=s3://asdf \
--zones=eu-west-1a \
--node-count=1 \
--node-size=t2.micro \
--master-size=t2.micro \
--master-count=1 \
--dns-zone=asdf.com \
--ssh-public-key=~/.ssh/id_rsa.pub
kops update cluster --name asdf.com
Successfully updated my cluster.
But when I try to validate and run
kubectl get nodes
I get the error: server gave HTTP response to HTTPS client
kops validate cluster --name asdf.com
Validation failed: unexpected error during validation: error listing nodes: Get https://api.asdf.com/api/v1/nodes: http: server gave HTTP response to HTTPS client
I couldn't solve this.
I tried
kubectl config set-cluster asdf.com --insecure-skip-tls-verify=true
but it didn't work.
Please can you help?
t2.micro instances may be too small for control plane nodes. They will certainly be very slow to boot properly. You can try omitting that flag (i.e. use the default size) and see if the cluster boots up properly.
Tip: use kops validate cluster --wait=30m, as it may provide more clues to what is wrong.
Except for the instance size, the command above looks good. But if you want to dig deeper, you can have a look at https://kops.sigs.k8s.io/operations/troubleshoot/

How to configure MongoDB official source connector for Kafka Connect running on a kubernetes cluster

My Kafka cluster runs on Kubernetes, and I am using a custom image to run Kafka Connect with the required official MongoDB source and sink connectors.
My MongoDB instance also runs on Kubernetes. My issue is that I am unable to connect my live DB with Kafka Connect.
My connector config currently looks like this:
curl -X PUT \
-H "Content-Type: application/json" \
--data '{
"connector.class":"com.mongodb.kafka.connect.MongoSourceConnector",
"tasks.max": "1",
"connection.uri": "mongodb://192.168.190.132:27017,192.168.190.137:27017",
"database": "tractor",
"collection": "job",
"topic.prefix": "testing-mongo"
}' \
http://10.108.202.171:8083/connectors/mongo_source_job/config
Thanks for your reply. The issue stemmed from TLS. I modified my config as follows:
"connection.uri": "mongodb://192.168.190.132:27017,192.168.190.137:27017/?tlsInsecure=true"
It's working now!
Can you try connecting to the MongoDB service using the service name?
kubectl get service -n <namespace>
Use the command above to list the services in MongoDB's namespace, then use the service name instead of the IPs you have and see if that works.
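For illustration, a connection.uri built from the in-cluster DNS name of a service would look like this (the service name, namespace, and helper function are hypothetical, not from the question):

```shell
#!/usr/bin/env bash
# Sketch: build a connection.uri from a Kubernetes service DNS name instead
# of pod IPs. Service and namespace names here are hypothetical.
make_mongo_uri() {
  local service="$1" namespace="$2" port="${3:-27017}"
  echo "mongodb://${service}.${namespace}.svc.cluster.local:${port}"
}

make_mongo_uri mongodb-svc default
# mongodb://mongodb-svc.default.svc.cluster.local:27017
```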

Troubleshooting Kubernetes tutorial fine parallel

I am attempting to work through the following tutorial https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/ . My problem happens at the very first step, trying to start up redis. When I run
kubectl run -i --tty temp --image redis --command "/bin/sh"
I create a new pod, however running
redis-cli -h redis
returns an error: Could not connect to Redis at redis:6379: Name or service not known
It looks like you don't have Kube DNS set up correctly, and what you got is just a simple problem with name resolution.
If you look again at the tutorial, they even mention that you can encounter such a problem:
Note: if you do not have Kube DNS setup correctly, you may need to change the first step of the above block to redis-cli -h $REDIS_SERVICE_HOST.
So instead of using redis-cli -h redis, use redis-cli -h $REDIS_SERVICE_HOST and everything should work.
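The fallback can be sketched as a tiny helper that prefers the env var Kubernetes injects for the service and falls back to the DNS name (the helper name is mine, not from the tutorial):

```shell
#!/usr/bin/env bash
# Sketch: pick the Redis host, preferring the service env var Kubernetes
# injects ($REDIS_SERVICE_HOST) and falling back to the DNS name "redis".
redis_host() {
  echo "${REDIS_SERVICE_HOST:-redis}"
}

REDIS_SERVICE_HOST=10.96.0.12 redis_host   # 10.96.0.12
# then connect with: redis-cli -h "$(redis_host)"
```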

OpenShift Origin 3: "error: router could not be created"

I am trying to create a router as described on https://docs.openshift.com/enterprise/3.0/install_config/install/deploy_router.html#haproxy-router
However, when I run:
oadm router router --replicas=1 \
--credentials='/etc/openshift/master/openshift-router.kubeconfig' \
--service-account=router
I get the following error:
[root@openshift ~]# oadm router router --replicas=1 \
> --credentials='/etc/openshift/master/openshift-router.kubeconfig' \
> --service-account=router
error: router could not be created; the provided credentials "/etc/openshift/master/openshift-router.kubeconfig" could not be loaded: stat /etc/openshift/master/openshift-router.kubeconfig: no such file or directory
[root@openshift ~]#
Does anyone know what the problem is, and how I solve this?
Thanks
You are using OpenShift Origin 1.1, which isn't exactly the same as OpenShift 3.1. The openshift-router.kubeconfig is in /etc/origin/master/...
Try using the documentation for Origin (1.1).
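A quick way to check which path your install actually has (the helper name is mine; the two candidate paths come from the question and this answer):

```shell
#!/usr/bin/env bash
# Sketch: print the first router kubeconfig path that exists. The two
# candidate paths come from the question and the answer above.
find_router_kubeconfig() {
  local f
  for f in "$@"; do
    if [ -f "$f" ]; then
      echo "$f"
      return 0
    fi
  done
  return 1
}

find_router_kubeconfig \
  /etc/openshift/master/openshift-router.kubeconfig \
  /etc/origin/master/openshift-router.kubeconfig \
  || echo "router kubeconfig not found in either location"
```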