Zuul not routing to student-service which is registered in Eureka Server.
using Greenwich.SR1
bootstrap.yml
server:
  port: 17005

# Eureka server details and its refresh time
eureka:
  instance:
    leaseRenewalIntervalInSeconds: 1
    leaseExpirationDurationInSeconds: 2
  client:
    registry-fetch-interval-seconds: 30
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
    healthcheck:
      enabled: true
    lease:
      duration: 5
  instance:
    lease-expiration-duration-in-seconds: 5
    lease-renewal-interval-in-seconds: 30

# Current service name to be used by the eureka server
spring:
  application:
    name: app-gateway

# Microservices routing configuration
zuul:
  routes:
    students:
      path: /students/**
      serviceId: student-service
  host:
    socket-timeout-millis: 30000

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 30000
I've added a pre-filter to log requests from the UI. Whenever a request from the UI hits Zuul, I see the log lines below but nothing after that; the request never gets routed to student-service.
Request Method : GET Request URL : http://localhost:17005/students/School2
2019-04-26 21:45:54.314 INFO 18196 --- [o-17005-exec-10] c.netflix.config.ChainedDynamicProperty : Flipping property: student-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2019-04-26 21:45:54.387 INFO 18196 --- [o-17005-exec-10] c.n.u.concurrent.ShutdownEnabledTimer : Shutdown hook installed for: NFLoadBalancer-PingTimer-student-service
2019-04-26 21:45:54.387 INFO 18196 --- [o-17005-exec-10] c.netflix.loadbalancer.BaseLoadBalancer : Client: student-service instantiated a LoadBalancer: DynamicServerListLoadBalancer:{NFLoadBalancer:name=student-service,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null
2019-04-26 21:45:54.682 INFO 18196 --- [o-17005-exec-10] c.n.l.DynamicServerListLoadBalancer : Using serverListUpdater PollingServerListUpdater
2019-04-26 21:45:54.720 INFO 18196 --- [o-17005-exec-10] c.netflix.config.ChainedDynamicProperty : Flipping property: student-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2019-04-26 21:45:54.723 INFO 18196 --- [o-17005-exec-10] c.n.l.DynamicServerListLoadBalancer : DynamicServerListLoadBalancer for client student-service initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=student-service,current list of Servers=[192.168.56.1:56567],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone; Instance count:1; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;]
},Server stats: [[Server:192.168.56.1:56567; Zone:defaultZone; Total Requests:0; Successive connection failure:0; Total blackout seconds:0; Last connection made:Thu Jan 01 05:30:00 IST 1970; First connection made: Thu Jan 01 05:30:00 IST 1970; Active Connections:0; total failure count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:0.0; max resp time:0.0; stddev resp time:0.0]
]}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList#42abe3b4
2019-04-26 21:45:55.742 INFO 18196 --- [erListUpdater-0] c.netflix.config.ChainedDynamicProperty : Flipping property: student-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
There is no exception trace in the microservices.
The response to the UI:
type=Not Found, status=404
Please help me get this routing working.
Hi everyone,
I have been learning Istio, and to understand how maxRequestsPerConnection works, I applied the manifest below.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 1
httpbin is a sample service that ships with Istio.
I thought maxRequestsPerConnection meant how many HTTP requests are allowed per TCP connection, so in this case Istio would close the TCP connection after the pod received one HTTP request.
After applying the manifest, I sent some HTTP requests using telnet. I expected Istio to accept one request and then close the TCP connection, but it didn't.
$ telnet httpbin 8000
Trying 10.76.12.133...
Connected to httpbin.default.svc.cluster.local.
Escape character is '^]'.
GET /get HTTP/1.1
User-Agent: Telnet [ja] (Linux)
Host: httpbin
HTTP/1.1 200 OK
server: envoy
date: Sun, 07 Nov 2021 14:14:16 GMT
content-type: application/json
content-length: 579
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 9
{
  "args": {},
  "headers": {
    "Host": "httpbin",
    "User-Agent": "Telnet [ja] (Linux)",
    "X-B3-Parentspanid": "b042ad708e2a47a2",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "b6a08d45e1a1e15e",
    "X-B3-Traceid": "fc23863eafb0322db042ad708e2a47a2",
    "X-Envoy-Attempt-Count": "1",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=d9bb27f31fe44200f803dbe736419b4664b5b81045bb3811711119ca5ccf6a37;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  },
  "origin": "127.0.0.6",
  "url": "http://httpbin/get"
}
GET /get HTTP/1.1
User-Agent: Telnet [ja] (Linux)
Host: httpbin
HTTP/1.1 200 OK
server: envoy
date: Sun, 07 Nov 2021 14:14:18 GMT
content-type: application/json
content-length: 579
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 3
{
  "args": {},
  "headers": {
    "Host": "httpbin",
    "User-Agent": "Telnet [ja] (Linux)",
    "X-B3-Parentspanid": "85722c0d777e8537",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "31d2acc5348a6fc5",
    "X-B3-Traceid": "d7ada94a092d681885722c0d777e8537",
    "X-Envoy-Attempt-Count": "1",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=d9bb27f31fe44200f803dbe736419b4664b5b81045bb3811711119ca5ccf6a37;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  },
  "origin": "127.0.0.6",
  "url": "http://httpbin/get"
}
After this, I sent ten HTTP requests using fortio and got the same result.
$ kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 1 -qps 0 -n 10 -loglevel Warning http://httpbin:8000/get
14:22:56 I logger.go:127> Log level is now 3 Warning (was 2 Info)
Fortio 1.17.1 running at 0 queries per second, 2->2 procs, for 10 calls: http://httpbin:8000/get
Starting at max qps with 1 thread(s) [gomax 2] for exactly 10 calls (10 per thread + 0)
Ended after 106.50891ms : 10 calls. qps=93.889
Aggregated Function Time : count 10 avg 0.010648204 +/- 0.01639 min 0.003757335 max 0.059256801 sum 0.106482036
# range, mid point, percentile, count
>= 0.00375734 <= 0.004 , 0.00387867 , 30.00, 3
> 0.004 <= 0.005 , 0.0045 , 70.00, 4
> 0.005 <= 0.006 , 0.0055 , 80.00, 1
> 0.012 <= 0.014 , 0.013 , 90.00, 1
> 0.05 <= 0.0592568 , 0.0546284 , 100.00, 1
# target 50% 0.0045
# target 75% 0.0055
# target 90% 0.014
# target 99% 0.0583311
# target 99.9% 0.0591642
Sockets used: 1 (for perfect keepalive, would be 1)
Jitter: false
Code 200 : 10 (100.0 %)
Response Header Sizes : count 10 avg 230.1 +/- 0.3 min 230 max 231 sum 2301
Response Body/Total Sizes : count 10 avg 824.1 +/- 0.3 min 824 max 825 sum 8241
All done 10 calls (plus 0 warmup) 10.648 ms avg, 93.9 qps
$
In my understanding, the message Sockets used: 1 (for perfect keepalive, would be 1) means fortio used only one TCP connection.
At first I guessed that clients open a different TCP connection for each HTTP request, but if that were true, the telnet connection would have been closed by the foreign host and fortio would have used ten TCP connections.
Could someone explain what maxRequestsPerConnection actually does?
I am trying to create a VPN connection in my app. On the server side I use an IKEv2 VPN server with strongSwan on Ubuntu 16.04, built following this guide (https://www.digitalocean.com/community/tutorials/how-to-set-up-an-ikev2-vpn-server-with-strongswan-on-ubuntu-16-04).
When I try to connect, the server logs this:
- May 5 08:58:21 ip-2 charon: 05[NET] received packet: from 3[500] to 2[500] (432 bytes)
- May 5 08:58:21 ip-2 charon: 05[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(REDIR_SUP) N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) ]
- May 5 08:58:21 ip-2 charon: 05[IKE] 3 is initiating an IKE_SA
- May 5 08:58:21 ip-2 charon: 05[IKE] local host is behind NAT, sending keep alives
- May 5 08:58:21 ip-2 charon: 05[IKE] remote host is behind NAT
- May 5 08:58:21 ip-2 charon: 05[IKE] received proposals inacceptable
- May 5 08:58:21 ip-2 charon: 05[ENC] generating IKE_SA_INIT response 0 [ N(NO_PROP) ]
- May 5 08:58:21 ip-2 charon: 05[NET] sending packet: from 2[500] to 3[500] (36 bytes)
- May 5 08:58:22 ip-2 charon: 16[NET] received packet: from 3[500] to 2[500] (432 bytes)
- May 5 08:58:22 ip-2 charon: 16[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(REDIR_SUP) N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) ]
- May 5 08:58:22 ip-2 charon: 16[IKE] 3 is initiating an IKE_SA
- May 5 08:58:22 ip-2 charon: 16[IKE] local host is behind NAT, sending keep alives
- May 5 08:58:22 ip-2 charon: 16[IKE] remote host is behind NAT
- May 5 08:58:22 ip-2 charon: 16[IKE] received proposals inacceptable
- May 5 08:58:22 ip-2 charon: 16[ENC] generating IKE_SA_INIT response 0 [ N(NO_PROP) ]
- May 5 08:58:22 ip-2 charon: 16[NET] sending packet: from 2[500] to 3[500] (36 bytes)
I use this configuration on the server:
config setup
    charondebug="ike 1, knl 1, cfg 0"
    uniqueids=no

conn ikev2-vpn
    auto=add
    compress=no
    type=tunnel
    keyexchange=ikev2
    fragmentation=yes
    forceencaps=yes
    lifetime=8h
    dpdaction=clear
    dpddelay=300s
    rekey=no
    left=%any
    leftid=<IP>
    leftcert=server-cert.pem
    leftsendcert=always
    leftsubnet=0.0.0.0/0
    right=%any
    rightid=%any
    rightauth=eap-mschapv2
    rightsourceip=10.10.10.0/24
    rightdns=8.8.8.8,8.8.4.4
    rightsendcert=never
    eap_identity=%identity
    ike=aes256-sha1-modp1024,3des-sha1-modp1024!
    esp=aes256-sha1,3des-sha1!
On iOS I use this code:
class VpnManager {
    let vpnManager = NEVPNManager.shared()
    let info = VPNINFO()

    func connectToVPN() {
        vpnManager.loadFromPreferences { error in
            guard error == nil else {
                print(error)
                return
            }
            let IKEv2Protocol = NEVPNProtocolIKEv2()
            IKEv2Protocol.serverAddress = self.info.serverAddress
            IKEv2Protocol.authenticationMethod = .certificate
            let certificate = SecCertificateCreateWithData(nil, Data(base64Encoded: self.info.cert)! as CFData)!
            let certificateData = SecCertificateCopyData(certificate) as Data
            IKEv2Protocol.identityData = certificateData
            self.vpnManager.protocolConfiguration = IKEv2Protocol
            self.vpnManager.isEnabled = true
            self.vpnManager.saveToPreferences { error in
                guard error == nil else {
                    print(error)
                    return
                }
                do {
                    try self.vpnManager.connection.startVPNTunnel(
                        options: ([
                            NEVPNConnectionStartOptionUsername: "username",
                            NEVPNConnectionStartOptionPassword: KeychainWrapper.passwordRefForVPNID("MY_PASSWORD")
                        ] as! [String: NSObject]))
                } catch let error {
                    print(error)
                }
            }
        }
    }
}
Expected result:
Connected
Actual result:
Connection -> Disconnected
Last console logs:
Jun 4 15:44:51 charon: 06[NET] received packet: from <my ip>[500] to <server ip>[500] (304 bytes)
Jun 4 15:44:51 charon: 06[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(REDIR_SUP) N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) ]
Jun 4 15:44:51 charon: 06[IKE] <my ip> is initiating an IKE_SA
Jun 4 15:44:51 charon: 06[CFG] selected proposal: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
Jun 4 15:44:51 charon: 06[IKE] local host is behind NAT, sending keep alives
Jun 4 15:44:51 charon: 06[IKE] remote host is behind NAT
Jun 4 15:44:51 charon: 06[ENC] generating IKE_SA_INIT response 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(CHDLESS_SUP) N(MULT_AUTH) ]
Jun 4 15:44:51 charon: 06[NET] sending packet: from <server ip>[500] to <my ip>[500] (328 bytes)
Jun 4 15:44:51 charon: 05[NET] received packet: from <my ip>[500] to <server ip>[500] (304 bytes)
Jun 4 15:44:51 charon: 05[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(REDIR_SUP) N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) ]
Jun 4 15:44:51 charon: 05[IKE] <my ip> is initiating an IKE_SA
Jun 4 15:44:51 charon: 05[CFG] selected proposal: IKE:AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
Jun 4 15:44:51 charon: 05[IKE] local host is behind NAT, sending keep alives
Jun 4 15:44:51 charon: 05[IKE] remote host is behind NAT
Jun 4 15:44:51 charon: 05[ENC] generating IKE_SA_INIT response 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(FRAG_SUP) N(CHDLESS_SUP) N(MULT_AUTH) ]
Jun 4 15:44:51 charon: 05[NET] sending packet: from <server ip>[500] to <my ip>[500] (328 bytes)
Jun 4 15:45:11 charon: 08[IKE] sending keep alive to <my ip>[500]
Jun 4 15:45:11 charon: 09[IKE] sending keep alive to <my ip>[500]
Jun 4 15:45:21 charon: 10[JOB] deleting half open IKE_SA with <my ip> after timeout
Jun 4 15:45:21 charon: 11[JOB] deleting half open IKE_SA with <my ip> after timeout
Your strongSwan server is configured with the following encryption algorithms:
ike=aes256-sha1-modp1024,3des-sha1-modp1024!
esp=aes256-sha1,3des-sha1!
Solution
You need to configure the NEVPNProtocolIKEv2 instance with ciphers the VPN server supports. In the server's proposals above, aes256 maps to .algorithmAES256, sha1 to .SHA96 (HMAC-SHA1-96), and modp1024 to Diffie-Hellman group 2.
IKEv2Protocol.ikeSecurityAssociationParameters.encryptionAlgorithm = .algorithmAES256
IKEv2Protocol.ikeSecurityAssociationParameters.integrityAlgorithm = .SHA96
IKEv2Protocol.ikeSecurityAssociationParameters.diffieHellmanGroup = .group2
IKEv2Protocol.ikeSecurityAssociationParameters.lifetimeMinutes = 480
IKEv2Protocol.childSecurityAssociationParameters.encryptionAlgorithm = .algorithmAES256
IKEv2Protocol.childSecurityAssociationParameters.integrityAlgorithm = .SHA96
IKEv2Protocol.childSecurityAssociationParameters.diffieHellmanGroup = .group2
IKEv2Protocol.childSecurityAssociationParameters.lifetimeMinutes = 60
I'm using Spring Cloud version 1.0.0.RC1 and getting the error 'Unable to locate ILoadBalancer for service'.
Any thoughts? Am I missing something? Isn't there a default load balancer used by Ribbon?
##########application.yml#############
spring:
  cloud:
    client:
      serviceIds:
        - records

records:
  ribbon:
    listOfClients: http://VISERVER09:8761/eureka-server-0.0.1-SNAPSHOT/eureka,http://VISERVER08:8761/eureka-server-0.0.1-SNAPSHOT/eureka
################ code using service#############
String url = "http://records/ServiceB-0.0.1-SNAPSHOT/records/{record}";
restTemplate.getForObject(url, String.class, "1234");
########## Log Output ###############
--- BaseLoadBalancer: Client:records instantiated a LoadBalancer:DynamicServerListLoadBalancer:{NFLoadBalancer:name=records,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null
--- ChainedDynamicProperty: Flipping property: records.ribbon.ActiveConnectionsLimitto use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
--- DynamicServerListLoadBalancer: DynamicServerListLoadBalancer for client records initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=records,current list of Servers=[VISERVER08:8181, VISERVER09:8181],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone; Instance count:2; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;]
},Server stats: [[Server:VISERVER09:8181; Zone:defaultZone; Total Requests:0; Successive connection failure:0; Total blackout
seconds:0; Last connection made:Thu Jan 01 01:00:00 GMT 1970; First connection made: Thu Jan 01 01:00:00 GMT 1970; Active Connections:0;
total failure count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:
0.0; max resp time:0.0; stddev resp time:0.0]
, [Server:VISERVER08:8181; Zone:defaultZone; Total Requests:0; Successive connection failure:0; Total blackout seconds:0;
Last connection made:Thu Jan 01 01:00:00 GMT 1970; First connection made: Thu Jan 01 01:00:00 GMT 1970; Active Connections:0; total failure
count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:0.0; max re
sp time:0.0; stddev resp time:0.0]
]}ServerList:DiscoveryEnabledNIWSServerList:; clientName:records; Effective vipAddresses:records; isSecure:false; datacenter:null
--- ConnectionPoolCleaner: Initializing ConnectionPoolCleaner for NFHttpClient:VISERVER08
--- ConnectionPoolCleaner: Initializing ConnectionPoolCleaner for NFHttpClient:VISERVER08
--- [dispatcherServlet]: Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.IllegalStateException: Unable to locate ILoadBalancer for service: VISERVER08] with root cause
**java.lang.IllegalStateException: Unable to locate ILoadBalancer for service: VISERVER08**
at org.springframework.cloud.netflix.ribbon.RibbonLoadBalancerClient.getServer(RibbonLoadBalancerClient.java:77)
at org.springframework.cloud.netflix.ribbon.RibbonLoadBalancerClient.execute(RibbonLoadBalancerClient.java:45)
at org.springframework.cloud.netflix.ribbon.RibbonInterceptor.intercept(RibbonInterceptor.java:30)
at org.springframework.http.client.InterceptingClientHttpRequest$RequestExecution.execute(InterceptingClientHttpRequest.java:84)
at org.springframework.http.client.InterceptingClientHttpRequest.executeInternal(InterceptingClientHttpRequest.java:69)
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48)
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:53)
It looks like you have two RibbonInterceptors in your RestTemplate. The second one tries to translate the already-resolved physical hostname (VISERVER08) as if it were a service id, gets nothing back from the service registry, and throws the IllegalStateException.
I'm trying to build a simple API application using beego. During a stress test I hit an unexpected problem: up to roughly 16,400 requests everything executes at fantastic speed, but after about 16,400 requests almost everything stops and only 1-2 requests per second get through. I have a feeling that beego cannot allocate a connection to the database. I tried changing the maxIdle and maxConn parameters, but it had no effect.
UPD: the same problem occurs with other databases.
MainController:
package controllers

import (
    models "github.com/Hepri/taxi/models"
    "github.com/astaxie/beego"
    "github.com/astaxie/beego/orm"
)

type MainController struct {
    beego.Controller
}

func (c *MainController) Get() {
    o := orm.NewOrm()
    app := models.ApiApp{}
    err := o.Read(&app)
    if err == orm.ErrMissPK {
        // do nothing
    }
    c.ServeJson()
}
Model:
package models

const (
    CompanyAccessTypeAll      = 1
    CompanyAccessTypeSpecific = 2
)

type ApiApp struct {
    Id    int    `orm:"auto"`
    Token string `orm:"size(100)"`
}

func (a *ApiApp) TableName() string {
    return "api_apps"
}
main.go:
package main

import (
    models "github.com/Hepri/taxi/models"
    _ "github.com/Hepri/taxi/routers"
    "github.com/astaxie/beego"
    "github.com/astaxie/beego/orm"
)

func main() {
    orm.RegisterDriver("postgres", orm.DR_Postgres)
    orm.RegisterDataBase("default", "postgres", "user=test password=123456 dbname=test sslmode=disable")
    orm.RegisterModel(new(models.ApiApp))
    beego.EnableAdmin = true
    orm.RunCommand()
    beego.Run()
}
Before reaching ~16,400:
Benchmarking localhost (be patient)
^C
Server Software: beegoServer:1.4.2
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 4 bytes
Concurrency Level: 10
Time taken for tests: 3.844 seconds
Complete requests: 16396
Failed requests: 0
Write errors: 0
Total transferred: 2492192 bytes
HTML transferred: 65584 bytes
Requests per second: 4264.91 [#/sec] (mean)
Time per request: 2.345 [ms] (mean)
Time per request: 0.234 [ms] (mean, across all concurrent requests)
Transfer rate: 633.07 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 2.2 0 275
Processing: 0 2 10.9 1 370
Waiting: 0 1 8.6 1 370
Total: 0 2 11.1 2 370
Percentage of the requests served within a certain time (ms)
50% 2
66% 2
75% 2
80% 2
90% 2
95% 3
98% 3
99% 4
100% 370 (longest request)
After reaching ~16,400:
Benchmarking localhost (be patient)
^C
Server Software: beegoServer:1.4.2
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 4 bytes
Concurrency Level: 10
Time taken for tests: 15.534 seconds
Complete requests: 16392
Failed requests: 0
Write errors: 0
Total transferred: 2491584 bytes
HTML transferred: 65568 bytes
Requests per second: 1055.22 [#/sec] (mean)
Time per request: 9.477 [ms] (mean)
Time per request: 0.948 [ms] (mean, across all concurrent requests)
Transfer rate: 156.63 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 11
Processing: 0 2 16.7 1 614
Waiting: 0 1 15.7 1 614
Total: 0 2 16.7 1 614
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 2
80% 2
90% 2
95% 2
98% 3
99% 3
100% 614 (longest request)
The same picture even after waiting 30 seconds:
Benchmarking localhost (be patient)
^C
Server Software: beegoServer:1.4.2
Server Hostname: localhost
Server Port: 8080
Document Path: /
Document Length: 4 bytes
Concurrency Level: 10
Time taken for tests: 25.585 seconds
Complete requests: 16391
Failed requests: 0
Write errors: 0
Total transferred: 2491432 bytes
HTML transferred: 65564 bytes
Requests per second: 640.65 [#/sec] (mean)
Time per request: 15.609 [ms] (mean)
Time per request: 1.561 [ms] (mean, across all concurrent requests)
Transfer rate: 95.10 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 10.1 0 617
Processing: 0 2 16.2 1 598
Waiting: 0 1 11.1 1 597
Total: 0 2 19.1 1 618
Percentage of the requests served within a certain time (ms)
50% 1
66% 2
75% 2
80% 2
90% 2
95% 2
98% 3
99% 3
100% 618 (longest request)
I am using Ubuntu 12, nginx, uWSGI 1.9 (with a socket), and Django 1.5.
Config:
[uwsgi]
base_path = /home/someuser/web/
module = server.manage_uwsgi
uid = www-data
gid = www-data
virtualenv = /home/someuser
master = true
vacuum = true
harakiri = 20
harakiri-verbose = true
log-x-forwarded-for = true
profiler = true
no-orphans = true
max-requests = 10000
cpu-affinity = 1
workers = 4
reload-on-as = 512
listen = 3000
Client test from Windows 7:
C:\Users\user>C:\AppServ\Apache2.2\bin\ab.exe -c 255 -n 5000 http://www.someweb.com/about/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/
Benchmarking www.someweb.com (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Finished 5000 requests
Server Software: nginx
Server Hostname: www.someweb.com
Server Port: 80
Document Path: /about/
Document Length: 1881 bytes
Concurrency Level: 255
Time taken for tests: 66.669814 seconds
Complete requests: 5000
Failed requests: 1
(Connect: 1, Length: 0, Exceptions: 0)
Write errors: 0
Total transferred: 10285000 bytes
HTML transferred: 9405000 bytes
Requests per second: 75.00 [#/sec] (mean)
Time per request: 3400.161 [ms] (mean)
Time per request: 13.334 [ms] (mean, across all concurrent requests)
Transfer rate: 150.64 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 8 207.8 1 9007
Processing: 10 3380 11480.5 440 54421
Waiting: 6 1060 3396.5 271 48424
Total: 11 3389 11498.5 441 54423
Percentage of the requests served within a certain time (ms)
50% 441
66% 466
75% 499
80% 519
90% 3415
95% 36440
98% 54407
99% 54413
100% 54423 (longest request)
I have set following options too:
echo 3000 > /proc/sys/net/core/netdev_max_backlog
echo 3000 > /proc/sys/net/core/somaxconn
So:
1) The first 3000 requests are super fast. I see progress in ab and in the uwsgi request logs:
[pid: 5056|app: 0|req: 518/4997] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
[pid: 5052|app: 0|req: 512/4998] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
[pid: 5054|app: 0|req: 353/4999] 80.114.157.139 () {30 vars in 378 bytes} [Thu Mar 21 12:37:31 2013] GET /about/ => generated 1881 bytes in 4 msecs (HTTP/1.0 200) 3 headers in 105 bytes (1 switches on core 0)
I don't have any broken pipes or worker respawns.
2) The next requests run very slowly or time out. It looks like some buffer fills up and I have to wait for it to empty.
3) Some buffer becomes empty.
4) ~500 requests are processed super fast.
5) Some timeout.
6) See no. 4.
7) See no. 5.
8) See no. 4.
9) See no. 5.
...
I need your help.
Check with netstat and dmesg. You have probably exhausted ephemeral ports or filled the conntrack table.