Spring Boot Admin can't distinguish between multiple service instances in Cloud Foundry - spring-boot-admin

I got Spring Boot Admin running locally with Eureka service discovery (no SBA dependency in the clients). Now I tried to deploy it to Cloud Foundry. According to the documentation, version 2.0.1 should "support CloudFoundry out of the box".
My problem is that when I scale a service up to multiple instances, they are all registered under the same hostname and port. Eureka shows me all instances with the instance ID that I configured like this:
eureka:
  instance:
    instanceId: ${spring.application.name}:${vcap.application.instance_id:${spring.application.instance_id:${random.value}}}
But Spring Boot Admin only lists one instance, with hostname:port as the identifier. I think I have to configure something on the client so that it sends the instance ID in an HTTP header when registering, but I don't know how.

Apparently you have to set the application ID and instance index that Cloud Foundry generates as Eureka metadata (applicationId and instanceId) at startup/context refresh of your client.
CloudFoundryApplicationInitializer.kt
import com.netflix.appinfo.ApplicationInfoManager
import org.slf4j.LoggerFactory
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.context.properties.EnableConfigurationProperties
import org.springframework.cloud.context.scope.refresh.RefreshScopeRefreshedEvent
import org.springframework.context.annotation.Profile
import org.springframework.context.event.EventListener
import org.springframework.stereotype.Component
import javax.annotation.PostConstruct

@Component
@Profile("cloud")
@EnableConfigurationProperties(CloudFoundryApplicationProperties::class)
class CloudFoundryApplicationInitializer {
    private val log = LoggerFactory.getLogger(CloudFoundryApplicationInitializer::class.java)

    @Autowired
    private var applicationInfoManager: ApplicationInfoManager? = null

    @Autowired
    private var cloudFoundryApplicationProperties: CloudFoundryApplicationProperties? = null

    @EventListener
    fun onRefreshScopeRefreshed(event: RefreshScopeRefreshedEvent) {
        injectCfMetadata()
    }

    @PostConstruct
    fun onPostConstruct() {
        injectCfMetadata()
    }

    // Copies the Cloud Foundry application id and instance index into the Eureka
    // instance metadata so that each instance registers uniquely.
    fun injectCfMetadata() {
        val properties = this.cloudFoundryApplicationProperties
        val infoManager = this.applicationInfoManager
        if (properties == null) {
            log.error("Cloud Foundry properties not set")
            return
        }
        if (infoManager == null) {
            log.error("ApplicationInfoManager is null")
            return
        }
        val map = infoManager.info.metadata
        map["applicationId"] = properties.applicationId
        map["instanceId"] = properties.instanceIndex
    }
}
CloudFoundryApplicationProperties.kt
import org.springframework.boot.context.properties.ConfigurationProperties

// Bound from the VCAP_APPLICATION values that Cloud Foundry provides.
@ConfigurationProperties("vcap.application")
class CloudFoundryApplicationProperties {
    var applicationId: String? = null
    var instanceIndex: String? = null
    var uris: List<String> = ArrayList()
}

Related

Kafka Connect using the REST API with Strimzi and kind: KafkaConnector

I'm trying to use the Kafka Connect REST API for managing connectors. For simplicity, consider the following pause implementation:
def pause(): Unit = {
  logger.info(s"pause() Triggered")
  val response = HttpClient.newHttpClient.send({
    HttpRequest
      .newBuilder(URI.create(config.connectUrl + s"/connectors/${config.connectorName}/pause"))
      .PUT(BodyPublishers.noBody)
      .timeout(Duration.ofMillis(config.timeout.toMillis.toInt))
      .build()
  }, BodyHandlers.ofString)
  if (response.statusCode() != HTTPStatus.Accepted) {
    throw new Exception(s"Could not pause connector: ${response.body}")
  }
}
Since I'm using KafkaConnector as a resource, I cannot use the Kafka Connect REST API, because the connector operator has the KafkaConnector resources as its single source of truth; manual changes such as pause made directly using the Kafka Connect REST API are reverted by the Cluster Operator.
So to pause the connector I need to edit the resource in some way.
I'm struggling to change the logic of the current function. It would be great to have some practical examples of how to handle KafkaConnector resources.
I checked out the Using Strimzi docs but couldn't find any practical examples.
Thanks!
After help from @Jakub I managed to create my new client:
class KubernetesService(config: Configuration) extends StrictLogging {

  private[this] val client = new DefaultKubernetesClient(Config.autoConfigure(config.connectorContext))

  def setPause(pause: Boolean): Unit = {
    logger.info(s"[KubernetesService] - setPause($pause) Triggered")
    val connector = getConnector()
    connector.getSpec.setPause(pause)
    Crds.kafkaConnectorOperation(client).inNamespace(config.connectorNamespace).withName(config.connectorName).replace(connector)
    Crds.kafkaConnectorOperation(client)
      .inNamespace(config.connectorNamespace)
      .withName(config.connectorName)
      .waitUntilCondition(connector => {
        connector != null &&
        connector.getSpec.getPause == pause && {
          val desiredState = if (pause) "Paused" else "Running"
          connector.getStatus.getConditions.stream().anyMatch(_.getType.equalsIgnoreCase(desiredState))
        }
      }, config.timeout.toMillis, TimeUnit.MILLISECONDS)
  }

  def delete(): Unit = {
    logger.info(s"[KubernetesService] - delete() Triggered")
    Crds.kafkaConnectorOperation(client).inNamespace(config.connectorNamespace).withName(config.connectorName).delete
    Crds.kafkaConnectorOperation(client)
      .inNamespace(config.connectorNamespace)
      .withName(config.connectorName)
      .waitUntilCondition(_ == null, config.timeout.toMillis, TimeUnit.MILLISECONDS)
  }

  def create(oldKafkaConnect: KafkaConnector): Unit = {
    logger.info(s"[KubernetesService] - create(${oldKafkaConnect.getMetadata}) Triggered")
    Crds.kafkaConnectorOperation(client).inNamespace(config.connectorNamespace).withName(config.connectorName).create(oldKafkaConnect)
    Crds.kafkaConnectorOperation(client)
      .inNamespace(config.connectorNamespace)
      .withName(config.connectorName)
      .waitUntilCondition(connector => {
        connector != null &&
        connector.getStatus.getConditions.stream().anyMatch(_.getType.equalsIgnoreCase("Running"))
      }, config.timeout.toMillis, TimeUnit.MILLISECONDS)
  }

  def getConnector(): KafkaConnector = {
    logger.info(s"[KubernetesService] - getConnector() Triggered")
    Try {
      Crds.kafkaConnectorOperation(client).inNamespace(config.connectorNamespace).withName(config.connectorName).get
    } match {
      case Success(connector) => connector
      case Failure(_: NullPointerException) => throw new NullPointerException(s"Failure on getConnector(${config.connectorName}) on ns: ${config.connectorNamespace}, context: ${config.connectorContext}")
      case Failure(exception) => throw exception
    }
  }
}
To pause the connector, you can edit the KafkaConnector resource and set the pause field in .spec to true (see the docs). There are several options for how to do it. You can use kubectl and either apply the new YAML from a file (kubectl apply) or do it interactively using kubectl edit.
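For example, a one-line patch from the command line (a sketch assuming the connector is named my-connector in namespace myproject, matching the Java example below):
kubectl patch kafkaconnector my-connector -n myproject --type merge -p '{"spec":{"pause":true}}'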
If you want to do it programmatically, you will need to use a Kubernetes client to edit the resource. In Java, you can also use the api module of Strimzi, which has all the structures for editing the resources. I put together a simple example of pausing the Kafka connector in Java using the Fabric8 Kubernetes client and the api module:
package cz.scholz.strimzi.api.examples;

import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.MixedOperation;
import io.fabric8.kubernetes.client.dsl.Resource;
import io.strimzi.api.kafka.Crds;
import io.strimzi.api.kafka.KafkaConnectorList;
import io.strimzi.api.kafka.model.KafkaConnector;

public class PauseConnector {
    public static void main(String[] args) {
        String namespace = "myproject";
        String crName = "my-connector";

        KubernetesClient client = new DefaultKubernetesClient();
        MixedOperation<KafkaConnector, KafkaConnectorList, Resource<KafkaConnector>> op = Crds.kafkaConnectorOperation(client);

        KafkaConnector connector = op.inNamespace(namespace).withName(crName).get();
        connector.getSpec().setPause(true);
        op.inNamespace(namespace).withName(crName).replace(connector);

        client.close();
    }
}
(See https://github.com/scholzj/strimzi-api-examples for the full project)
I'm not a Scala user, but I assume it should be usable from Scala as well; I leave rewriting it from Java to Scala to you.
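For reference, a rough Scala translation of the Java example above, as a sketch only (same assumed namespace and connector name, untested):
import io.fabric8.kubernetes.client.DefaultKubernetesClient
import io.strimzi.api.kafka.Crds

object PauseConnector extends App {
  val namespace = "myproject"
  val crName = "my-connector"

  val client = new DefaultKubernetesClient()
  val op = Crds.kafkaConnectorOperation(client)

  // Fetch the resource, flip .spec.pause, and write it back.
  val connector = op.inNamespace(namespace).withName(crName).get()
  connector.getSpec.setPause(true)
  op.inNamespace(namespace).withName(crName).replace(connector)

  client.close()
}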

Ktor server app keeps increasing open connections

Hi, I recently deployed a Ktor server project as the REST API backend for my app. I'm using Netty and running it on the server as a system service. The Ktor server is running on port 7171, and whenever I check the connections on port 7171 the count keeps increasing. I'm checking with this command:
ss -ant | grep :7171 | wc -l
After one day the connection count is 20k+, the server crashes, and nothing works.
I think some connections are kept open. In the logs I don't get any errors except a few like "connection reset by peer".
I'm also using HttpClient with Apache, and for caching a list of data I'm storing it in a companion object so I'm not fetching it from the database every time.
I reviewed the code, and these two things are the only ones I have doubts about.
These are my Gradle dependencies:
implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8:$kotlin_version")
implementation("io.ktor:ktor-server-netty:$ktor_version")
implementation("io.ktor:ktor-client-apache:$ktor_version")
implementation("io.ktor:ktor-client-logging-native:$ktor_version")
implementation("io.ktor:ktor-gson:$ktor_version")
implementation("ch.qos.logback:logback-classic:$logback_version")
implementation("io.ktor:ktor-metrics:$ktor_version")
implementation("io.ktor:ktor-server-core:$ktor_version")
implementation("io.ktor:ktor-server-sessions:$ktor_version")
implementation("io.ktor:ktor-auth-jwt:$ktor_version")
implementation("org.jooq:jooq")
jooqGeneratorRuntime("mysql:mysql-connector-java:8.0.19")
implementation("mysql:mysql-connector-java:8.0.19")
implementation(group = "com.zaxxer", name = "HikariCP", version = "3.4.2")
implementation("io.sentry:sentry:1.7.30")
implementation("software.amazon.awssdk:s3:2.8.7")
Currently I have about 7k users, and the maximum number of concurrent users is 450.
Please guide me on how I can investigate the issue and figure out the problem.
Here is the HttpClient code:
suspend fun post(
    url: String,
    params: Map<String, String> = emptyMap(),
    headersMap: Map<String, String> = emptyMap()
): Result<String> {
    val httpClient = getHttpClient()
    return kotlin.runCatching {
        httpClient.post<String>(url) {
            body = MultiPartFormDataContent(
                formData {
                    params.forEach {
                        append(it.key, it.value)
                    }
                }
            )
            if (headersMap.isNotEmpty()) {
                headersMap.forEach { (key, value) ->
                    header(key, value)
                }
            }
        }.also {
            httpClient.close()
        }
    }.onFailure { httpClient.close() }
}

// A new client is created (and closed) for every single request.
private fun getHttpClient(): HttpClient {
    return HttpClient(Apache) {
        install(HttpTimeout) {
            requestTimeoutMillis = 60000
        }
        engine {
            customizeClient {
                // Trusts all certificates and skips hostname verification.
                sslContext = SSLContextBuilder.create().loadTrustMaterial(object : TrustStrategy {
                    override fun isTrusted(chain: Array<out X509Certificate>?, authType: String?): Boolean {
                        return true
                    }
                }).build()
                setSSLHostnameVerifier(NoopHostnameVerifier())
            }
        }
    }
}
Also, please check my API response headers; I think keep-alive should have some expiry?
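One observation on the code above: a new HttpClient is created and closed for every request. A minimal sketch of the commonly suggested alternative, a single shared client reused for the application's lifetime (an assumption about the cause, not a verified fix for this particular leak):
// Sketch: one shared client instead of one per request. Apache's connection
// pool is then reused, and close() is only needed on application shutdown.
object SharedHttpClient {
    val instance: HttpClient = HttpClient(Apache) {
        install(HttpTimeout) {
            requestTimeoutMillis = 60000
        }
    }
}

suspend fun post(url: String, params: Map<String, String> = emptyMap()): Result<String> =
    kotlin.runCatching {
        SharedHttpClient.instance.post<String>(url) {
            body = MultiPartFormDataContent(formData {
                params.forEach { append(it.key, it.value) }
            })
        }
    }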

Play evolutions not applied in custom Slick environment configuration

DESCRIPTION:
Hi. I am using the Play framework with Slick and PostgreSQL for my application, so I design CI pipelines and configure them in my application.conf. When we set the Slick configuration like this:
play.evolutions.db.default {
  enabled = true
  autoApply = true
}

slick.dbs.default {
  driver = "slick.driver.PostgresDriver$"
  db {
    driver = org.postgresql.Driver
    dbName = dbName
    url = "jdbc:postgresql://127.0.0.1/dbName"
    user = ***
    password = ***
  }
}
and in the code (DAO files):
@Singleton
class UserDao @Inject()(
  protected val dbConfigProvider: DatabaseConfigProvider
)(implicit val ex: ExecutionContext) extends HasDatabaseConfigProvider[JdbcProfile] {
  import driver.api._
  val userTableQuery = TableQuery[UserTable]
everything works well, including the EVOLUTIONS that Play provides for us.
But if you want to set up other environments such as staging or production, you will fail :D.
I read the Slick documentation (you can read it from here), which is perfect for writing a successful config file, so I wrote it like this:
com.my.org {
  env = "development"
  env = ${?MY_ENV}

  development {
    db {
      dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
      properties = {
        driver = "slick.driver.PostgresDriver$"
        user = "myuser"
        password = "*****"
        url = "jdbc:postgresql://myIP/dbName"
      }
      numThreads = 10
    }
  }
  staging {
    db {
      ip = 186.14.*.*
      ...
    }
  }
  production {
    db {
      ip = 196.82.*.*
      ...
    }
  }
}
** The important thing you must pay attention to is that my PostgreSQL runs outside of my Docker container, so I must connect to it remotely.
and in the code we have:
class UserDao @Inject()(
)(implicit val ex: ExecutionContext) {
  import driver.api._
  val db = Database.forConfig(s"$prefix.db")
  val userTableQuery = TableQuery[UserTable]
PROBLEM:
The problem is that now Play evolutions are not applied.
QUESTION:
I need to know how to implement one of these (to solve my problem):
how to apply Play evolutions in the setup described above (in the problem part)?
how to set up my environments in a better way?
A friend of mine and I discussed the problem [over the phone], and here is the solution we came up with:
slick.dbs.default.driver = "slick.driver.PostgresDriver$"
slick.dbs.default.db {
  driver = org.postgresql.Driver
  ip = localhost
  dbName = ***
  user = ***
  password = "***"
  url = "jdbc:postgresql://postgresql/"${slick.dbs.default.db.dbName}
}
You can also use Docker to create a Docker network and use your PostgreSQL container name instead of an IP address.
Also, if you want to be able to configure the IP address, say from Jenkins or Play_Runtime_Guice, you can use this:
url="jdbc:postgresql://"${?POSTGRESQL_IP}"/dbName"

Configure MongoDB property maxWaitQueueSize in a Spring Boot application?

I get the error com.mongodb.MongoWaitQueueFullException: Too many threads are already waiting for a connection. Max number of threads (maxWaitQueueSize) of 500 has been exceeded. while doing a stress test on my application.
So I am thinking of configuring the maxWaitQueueSize property via configuration.
I am using Spring Boot to configure the MongoDB connection. I am using @EnableAutoConfiguration in my application, and I have declared only spring.data.mongodb.uri=mongodb://user:password@ip:27017 in the application.properties file.
How do I configure the maxWaitQueueSize property with Spring Boot?
How do I decide on a good value for maxWaitQueueSize?
If you're using MongoDB 3.0+, you can set waitQueueMultiple in your Mongo URI:
spring.data.mongodb.uri=mongodb://user:password@ip:27017/?waitQueueMultiple=10
waitQueueMultiple is a number that the driver multiplies the maxPoolSize value by to provide the maximum number of threads allowed to wait for a connection to become available from the pool.
How do I decide a good value for the maxWaitQueueSize?
It's not directly related to MongoDB, but you can read more about pool sizing in the HikariCP GitHub wiki.
In com.mongodb.MongoClientURI, you can find the parameters that can be used in MongoClientOptions:
if (key.equals("maxpoolsize")) {
    builder.connectionsPerHost(Integer.parseInt(value));
} else if (key.equals("minpoolsize")) {
    builder.minConnectionsPerHost(Integer.parseInt(value));
} else if (key.equals("maxidletimems")) {
    builder.maxConnectionIdleTime(Integer.parseInt(value));
} else if (key.equals("maxlifetimems")) {
    builder.maxConnectionLifeTime(Integer.parseInt(value));
} else if (key.equals("waitqueuemultiple")) {
    builder.threadsAllowedToBlockForConnectionMultiplier(Integer.parseInt(value));
} else if (key.equals("waitqueuetimeoutms")) {
    builder.maxWaitTime(Integer.parseInt(value));
} else if (key.equals("connecttimeoutms")) {
    builder.connectTimeout(Integer.parseInt(value));
} else if (key.equals("sockettimeoutms")) {
    builder.socketTimeout(Integer.parseInt(value));
} else if (key.equals("autoconnectretry")) {
    builder.autoConnectRetry(_parseBoolean(value));
} else if (key.equals("replicaset")) {
    builder.requiredReplicaSetName(value);
} else if (key.equals("ssl")) {
    if (_parseBoolean(value)) {
        builder.socketFactory(SSLSocketFactory.getDefault());
    }
}
I am using spring-boot-starter-webflux and this issue also happens.
I tried to add a MongoClientFactoryBean, but it doesn't work.
The whole application is located at https://github.com/yigubigu/webfluxbenchmark. I tried to test the performance benchmark of WebFlux against the original MVC.
@Bean
public MongoClientFactoryBean mongoClientFactoryBean() {
    MongoClientFactoryBean factoryBean = new MongoClientFactoryBean();
    factoryBean.setHost("localhost");
    factoryBean.setPort(27017);
    factoryBean.setSingleton(true);
    MongoClientOptions options = MongoClientOptions.builder()
            .connectionsPerHost(1000)
            .minConnectionsPerHost(500)
            .threadsAllowedToBlockForConnectionMultiplier(10)
            .build();
    factoryBean.setMongoClientOptions(options);
    return factoryBean;
}
You can achieve this by injecting a MongoClientOptions object into your MongoTemplate.
This maxWaitQueueSize limit is computed here in the Java client source code:
https://github.com/mongodb/mongo-java-driver/blob/3.10.x/driver-core/src/main/com/mongodb/connection/ConnectionPoolSettings.java#L273
It is the product of maxConnectionPoolSize and threadsAllowedToBlockForConnectionMultiplier, and hence can be modified through ?maxPoolSize= and ?waitQueueMultiple= in the connection URI.
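Putting the two URI parameters together, a quick worked example with illustrative values: with maxPoolSize=100 and waitQueueMultiple=10, the computed wait queue limit is 100 * 10 = 1000 waiting threads.
# illustrative values only: wait queue limit = maxPoolSize * waitQueueMultiple = 1000
spring.data.mongodb.uri=mongodb://user:password@ip:27017/?maxPoolSize=100&waitQueueMultiple=10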

ServiceStack OrmLite and PostgreSQL - timeouts

I am updating large amounts of data using ServiceStack's OrmLite with a connection to PostgreSQL; however, I am getting a large number of timeouts.
Sample code:
public class AccountService : Service
{
    public void Any(ImportAccounts request)
    {
        var sourceAccountService = this.ResolveService<SourceAccountService>();
        var sourceAccounts = (GetSourceAccountsResponse)sourceAccountService.Get(new GetSourceAccounts());

        foreach (var a in sourceAccounts.Result)
        {
            Db.Save(a.ConvertTo<Account>());
        }
    }
}
The SourceAccount service, which sits in the same project and accesses the same Db:
public class SourceAccountService : Service
{
    public object Get(GetSourceAccounts request)
    {
        return new GetSourceAccountsResponse { Result = Db.Select<SourceAccounts>().ToList() };
    }
}
Questions:
Should I be expecting a large number of timeouts considering the above setup?
Is it better to use using (IDbConnection db = DbFactory.OpenDbConnection()) instead of Db?
If you're resolving and executing a Service, you should do it in a using statement so its open Db connection and other resources are properly disposed of:
using (var service = this.ResolveService<SourceAccountService>())
{
    var sourceAccounts = service.Get(new GetSourceAccounts());

    foreach (var a in sourceAccounts.Result)
    {
        Db.Save(a.ConvertTo<Account>());
    }
}
If you're executing other Services, it's better to specify the return type on the Service for added type safety and reduced boilerplate at each call site, e.g.:
public class SourceAccountService : Service
{
    public GetSourceAccountsResponse Get(GetSourceAccounts request)
    {
        return new GetSourceAccountsResponse {
            Result = Db.Select<SourceAccounts>()
        };
    }
}
Note: Db.Select<T> returns a List, so .ToList() is unnecessary.
Another alternative for executing a Service instead of ResolveService<T> is to use:
var sourceAccounts = (GetSourceAccountsResponse)base.ExecuteRequest(new GetSourceAccounts());
which is the same as above and executes the Service within a using {}.
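On the timeouts themselves, saving row by row in a loop means one round trip per account. A sketch of a batched alternative using OrmLite's SaveAll inside a transaction (an illustration of the idea, not a confirmed fix for the timeouts in this setup):
using (var service = this.ResolveService<SourceAccountService>())
{
    var accounts = service.Get(new GetSourceAccounts())
        .Result
        .Select(a => a.ConvertTo<Account>())
        .ToList();

    // One transaction and one batched save instead of a round trip per row.
    using (var trans = Db.OpenTransaction())
    {
        Db.SaveAll(accounts);
        trans.Commit();
    }
}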