I've got an AWS EKS environment that I'm just setting up, and I'm getting 400 Bad Request responses.
Below is my config; I wanted to ask if anyone sees anything I should change.
I can see the requests are getting through the AWS NLB and reaching the NGINX ingress controller, but I can't find any useful information in the ingress controller logs. I'll add some below.
I'm terminating TLS (443) at the NLB, so I'm sending plain HTTP on port 80 to the controller...
W0408 01:06:35.428413 7 controller.go:1094] Service "dating/photoapi-prod" does not have any active Endpoint.
W0408 01:06:35.428682 7 controller.go:1094] Service "dating/photoapi-test" does not have any active Endpoint.
192.168.119.209 - - [08/Apr/2022:01:50:55 +0000] "\x00" 400 150 "-" "-" 0 0.000 [] [] - - - - 1d65f0090db1addb14e870d877977bfc
192.168.119.209 - - [08/Apr/2022:01:50:59 +0000] "\x00" 400 150 "-" "-" 0 0.000 [] [] - - - - b853574052dfd56745839f72a6fc5ed1
192.168.90.222 - - [08/Apr/2022:01:50:59 +0000] "\x00" 400 150 "-" "-" 0 0.000 [] [] - - - - c38d59e8ffcb008cf01ab4bb3ea4cd39
192.168.90.222 - - [08/Apr/2022:01:51:00 +0000] "\x00" 400 150 "-" "-" 0 0.000 [] [] - - - - 3ca1bfdbb1f35c7b89d213636a12e864
192.168.119.209 - - [08/Apr/2022:01:51:05 +0000] "\x00" 400 150 "-" "-" 0 0.000 [] [] - - - - 338a2445058641d71c32e9acdf467118
As per the error, the services behind the ingress controller don't have any active endpoints, i.e. no running pods backing them.
Do you have containers running behind the services mentioned in the error?
In my case, it was fixed after removing an unused listening port on the LB.
I think it was caused by the health check.
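For reference, here is a minimal sketch of what TLS termination at the NLB can look like on the ingress controller's Service, using the standard in-tree AWS annotations. The names, selector, and certificate ARN below are placeholders rather than values from the original post:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # provision an NLB rather than a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # terminate TLS at the NLB with an ACM certificate (placeholder ARN)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:REGION:ACCOUNT:certificate/CERT-ID"
    # only the 443 listener speaks TLS; traffic is forwarded to the pods as plain HTTP
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    # 443 on the NLB -> plain HTTP port 80 on the controller pods
    - name: https
      port: 443
      targetPort: 80
    # keep only the listeners you actually use: every Service port becomes an
    # NLB listener with its own health check, and binary traffic (un-terminated
    # TLS or a probe) hitting the plain-HTTP port is what produces 400 log
    # entries like the "\x00" ones above
    - name: http
      port: 80
      targetPort: 80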
We have configured Istio 1.4.0 with the demo profile on a Kubernetes 1.15.1 cluster. It was working as expected, but after some time we started facing issues with applications that connect to backend servers such as MongoDB. The application pod goes into CrashLoopBackOff, and if I disable Istio it works properly.
Upon checking the istio-proxy logs, I found lines showing HTTP/1.1 with a DPE flag, and the MongoDB IP and port number.
Below are the istio-proxy (sidecar) logs:
[2020-03-11T13:40:28.504Z] "- - HTTP/1.1" 0 DPE "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - <mongo IP>:27017 10.233.92.103:49412 - -
[2020-03-11T13:40:28.508Z] "- - HTTP/1.1" 0 DPE "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - <mongo IP>:27017 10.233.92.103:52062 - -
[2020-03-11T13:40:28.528Z] "- - HTTP/1.1" 0 DPE "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - <mongo IP>:27017 10.233.92.103:37182 - -
[2020-03-11T13:40:28.529Z] "- - HTTP/1.1" 0 DPE "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - <mongo IP>:27017 10.233.92.103:49428 - -
[2020-03-11T13:40:28.530Z] "- - HTTP/1.1" 0 DPE "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - 10.26.61.18:27017 10.233.92.103:52078 - -
[2020-03-11T13:40:28.569Z] "POST /intake/v2/events HTTP/1.1" 202 - "-" "-" 941 0 3 1 "-" "elasticapm-node/3.3.0 elastic-apm-http-client/9.3.0 node/10.12.0" "8954f0a1-709b-963c-a480-05b078955c89" "<apm>:8200" "10.26.61.45:8200" PassthroughCluster - <apm>:8200 10.233.92.103:49992 - -
[2020-03-11T13:40:28.486Z] "- - -" 0 - "-" "-" 47 3671 98 - "-" "-" "-" "-" "<redis>:6379" PassthroughCluster 10.233.92.103:37254 <redis>:6379 10.233.92.103:37252 - -
[2020-03-11T13:40:30.168Z] "- - -" 0 - "-" "-" 632 1212236 104 - "-" "-" "-" "-" "104.16.25.35:443" PassthroughCluster 10.233.92.103:60760 104.16.25.35:443 10.233.92.103:60758 - -
The application logs show the error below:
{ err: 'socketHandler', trace: '', bin: undefined, parseState: { sizeOfMessage: 1347703880, bytesRead: undefined, stubBuffer: undefined } }
The issue has been resolved.
RCA:
I had manually created the Service and Endpoints for MongoDB with the port name set to http.
After that, when I checked the listeners in the proxy config via the istioctl command, I found an entry with address 0.0.0.0 and port 27017:
ADDRESS PORT TYPE
0.0.0.0 27017 TCP
From the JSON output, I could see traffic going to BlackHoleCluster even though I had set ALLOW_ANY for the PassthroughCluster.
The istio-proxy output always gave me the DPE error.
After understanding the issue, I changed the port name from http to http1 and it worked properly.
Now I need to understand why the name http was causing so many issues.
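For context, Istio decides the protocol of a manually created Service from the port name: a port named http (or http-<something>) is parsed as HTTP/1.1, which is why Envoy reported DPE (downstream protocol error) for the MongoDB wire protocol, while an unrecognized name falls back to plain TCP. Below is a minimal sketch of the Service/Endpoints pair with a non-HTTP port name; the object name mongodb-external and the IP are placeholders, not values from the post:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-external
spec:
  ports:
    # a non-HTTP port name ("mongo", "tcp", or "tcp-mongo") keeps Envoy from
    # parsing this traffic as HTTP; the name "http" is what caused the DPE errors
    - name: mongo
      port: 27017
      protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongodb-external
subsets:
  - addresses:
      - ip: 10.0.0.10   # placeholder for the external MongoDB address
    ports:
      - name: mongo
        port: 27017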
Might be this issue: https://github.com/kubernetes/enhancements/issues/753
Basically, it's about the ordering of containers. At startup, if your application container needs network connectivity before the sidecar container (Envoy proxy) has fully started, the application will raise a networking error.
Reference: https://discuss.istio.io/t/k8s-istio-sidecar-injection-with-other-init-containers/845
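A common workaround (a sketch, not something taken from the linked threads) is to make the application container wait for the sidecar's readiness endpoint before starting. The snippet assumes the pilot-agent status port 15020 used around Istio 1.4, an image that ships a shell and wget, and placeholder names for the Deployment, image, and start command:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: my-image:latest
          command: ["/bin/sh", "-c"]
          # block until the istio-proxy sidecar reports ready, then start the app
          args:
            - |
              until wget -qO- http://127.0.0.1:15020/healthz/ready >/dev/null 2>&1; do
                echo "waiting for istio-proxy"; sleep 1;
              done
              exec ./start-app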
I am trying to create a Couchbase cluster in GKE with Istio (Envoy proxy) using the Autonomous Operator 1.1.
The operator starts up fine, and after applying the YAML to create the CouchbaseCluster, the first node starts up and then the second node starts. The issue is that the second node appears to fail to join the cluster, and additional nodes are not started.
I am not sure how to debug what is happening, or what needs to be done to get the cluster to start up in my GKE cluster. Any assistance is appreciated.
Thank you
Here are some of the logs from one of the Couchbase node pods:
I [2019-04-02T14:58:00.706Z] "POST /engageCluster2HTTP/1.1" 404 NR 0 0 0 - "-" "-" "782bde60-c611-4bfb-a0f4-9975300c71a4" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.8.13:37221
I [2019-04-02T14:58:05.706Z] "POST /engageCluster2HTTP/1.1" 404 NR 0 0 0 - "-" "-" "382b6163-e8bc-4259-baaa-e854c36af1bd" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.8.13:55515
I [2019-04-02T14:58:10.707Z] "POST /engageCluster2HTTP/1.1" 404 NR 0 0 0 - "-" "-" "390e417e-b179-4bbf-81d8-02cc28d2bc98" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.8.13:34377
I [2019-04-02T14:53:13.605Z] - 210 4281 300015 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45756 10.36.8.13:8091 10.36.9.12:49792
I [2019-04-02T14:58:15.709Z] "POST /engageCluster2HTTP/1.1" 404 NR 0 0 0 - "-" "-" "037d1791-9feb-47be-b699-10269aaf36e9" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.8.13:55307
I [2019-04-02T14:58:20.708Z] "POST /engageCluster2HTTP/1.1" 404 NR 0 0 0 - "-" "-" "5ca29b59-ff25-4a13-a0c1-62668d40c681" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.8.13:51205
I [2019-04-02T14:58:25.706Z] "POST /engageCluster2HTTP/1.1" 404 NR 0 0 0 - "-" "-" "9e21bc4d-1367-4d25-b674-39ae6341c9b4" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.8.13:41435
I [2019-04-02T14:58:30.710Z] "POST /engageCluster2HTTP/1.1" 404 NR 0 0 0 - "-" "-" "c2f8e866-e0a5-43ff-b54f-e5c504b17cdf" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.8.13:40203
I [2019-04-02T14:58:35.708Z] "POST /engageCluster2HTTP/1.1" 404 NR 0 0 0 - "-" "-" "4b02e855-cc72-49dc-99e1-a8644fdf1af8" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.8.13:56433
I [2019-04-02T14:53:13.641Z] - 16628 40061 324989 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:45760 10.36.8.13:8091 10.36.9.12:49796
I [2019-04-02T14:56:45.698Z] - 9490 13635 112934 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:46218 10.36.8.13:8091 10.36.9.12:50534
I [2019-04-02T14:56:45.665Z] - 210 4281 112967 "127.0.0.1:8091" inbound|8091||cb-example.couchbase.svc.cluster.local 127.0.0.1:46216 10.36.8.13:8091 10.36.9.12:50528
And a portion of the error.log from inside the couchbase container.
[ns_server:error,2019-04-03T16:09:47.398Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_agent-index<0.24974.68>:service_agent:handle_call:182]Got rebalance-only call {if_rebalance,<0.23572.68>,unset_rebalancer} that doesn't match rebalancer pid undefined
[ns_server:error,2019-04-03T16:09:47.398Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_rebalancer-index<0.23572.68>:service_agent:process_bad_results:810]Service call unset_rebalancer (service index) failed on some nodes:
[{'ns_1#cb-example-0000.cb-example.couchbase.svc',nack}]
[ns_server:error,2019-04-03T16:09:47.398Z,ns_1#cb-example-0000.cb-example.couchbase.svc:cleanup_process<0.23562.68>:service_janitor:maybe_init_topology_aware_service:87]Initial rebalance for `index` failed: {error,
{initial_rebalance_failed,index,
{linked_process_died,<0.23516.68>,
{no_connection,
"index-service_api"}}}}
[ns_server:error,2019-04-03T16:10:47.399Z,ns_1#cb-example-0000.cb-example.couchbase.svc:<0.24979.68>:service_agent:wait_for_connection_loop:299]No connection with label "index-service_api" after 60000ms. Exiting.
[ns_server:error,2019-04-03T16:10:47.399Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_agent-index<0.24974.68>:service_agent:handle_info:231]Linked process <0.24979.68> died with reason {no_connection,
"index-service_api"}. Terminating
[ns_server:error,2019-04-03T16:10:47.399Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_agent-index<0.24974.68>:service_agent:terminate:260]Terminating abnormally
[ns_server:error,2019-04-03T16:10:47.399Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_rebalancer-index<0.25043.68>:service_rebalancer:run_rebalance:82]Agent terminated during the rebalance: {'DOWN',#Ref<0.0.48.97712>,process,
<0.24974.68>,
{linked_process_died,<0.24979.68>,
{no_connection,"index-service_api"}}}
[ns_server:error,2019-04-03T16:10:47.400Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_agent-index<0.26461.68>:service_agent:handle_call:182]Got rebalance-only call {if_rebalance,<0.25043.68>,unset_rebalancer} that doesn't match rebalancer pid undefined
[ns_server:error,2019-04-03T16:10:47.400Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_rebalancer-index<0.25043.68>:service_agent:process_bad_results:810]Service call unset_rebalancer (service index) failed on some nodes:
[{'ns_1#cb-example-0000.cb-example.couchbase.svc',nack}]
[ns_server:error,2019-04-03T16:10:47.400Z,ns_1#cb-example-0000.cb-example.couchbase.svc:cleanup_process<0.25042.68>:service_janitor:maybe_init_topology_aware_service:87]Initial rebalance for `index` failed: {error,
{initial_rebalance_failed,index,
{linked_process_died,<0.24979.68>,
{no_connection,
"index-service_api"}}}}
[ns_server:error,2019-04-03T16:11:47.401Z,ns_1#cb-example-0000.cb-example.couchbase.svc:<0.26456.68>:service_agent:wait_for_connection_loop:299]No connection with label "index-service_api" after 60000ms. Exiting.
[ns_server:error,2019-04-03T16:11:47.401Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_agent-index<0.26461.68>:service_agent:handle_info:231]Linked process <0.26456.68> died with reason {no_connection,
"index-service_api"}. Terminating
[ns_server:error,2019-04-03T16:11:47.401Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_agent-index<0.26461.68>:service_agent:terminate:260]Terminating abnormally
[ns_server:error,2019-04-03T16:11:47.401Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_rebalancer-index<0.26515.68>:service_rebalancer:run_rebalance:82]Agent terminated during the rebalance: {'DOWN',#Ref<0.0.48.106235>,process,
<0.26461.68>,
{linked_process_died,<0.26456.68>,
{no_connection,"index-service_api"}}}
[ns_server:error,2019-04-03T16:11:47.402Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_agent-index<0.27939.68>:service_agent:handle_call:182]Got rebalance-only call {if_rebalance,<0.26515.68>,unset_rebalancer} that doesn't match rebalancer pid undefined
[ns_server:error,2019-04-03T16:11:47.402Z,ns_1#cb-example-0000.cb-example.couchbase.svc:service_rebalancer-index<0.26515.68>:service_agent:process_bad_results:810]Service call unset_rebalancer (service index) failed on some nodes:
[{'ns_1#cb-example-0000.cb-example.couchbase.svc',nack}]
[ns_server:error,2019-04-03T16:11:47.402Z,ns_1#cb-example-0000.cb-example.couchbase.svc:cleanup_process<0.26517.68>:service_janitor:maybe_init_topology_aware_service:87]Initial rebalance for `index` failed: {error,
{initial_rebalance_failed,index,
{linked_process_died,<0.26456.68>,
{no_connection,
"index-service_api"}}}}
And this is from the current portion of the couchbase-operator log:
I [2019-04-03T16:15:13.959Z] "GET /poolsHTTP/1.1" 404 NR 0 0 0 - "-" "Go-http-client/1.1" "cc976505-818a-4930-9fc8-8bdcb047185d" "cb-example-0000.cb-example.couchbase.svc:8091" "-" - - 10.36.8.13:8091 10.36.9.12:59280
I [2019-04-03T16:15:13.963Z] "GET /poolsHTTP/1.1" 404 NR 0 0 0 - "-" "Go-http-client/1.1" "bfd981b2-9356-4132-a7f8-2a6c0d8ba15f" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.9.12:57624
I [2019-04-03T16:15:14.939Z] - 119 135 0 "127.0.0.1:8080" inbound|8080||mgmtCluster 127.0.0.1:37568 10.36.9.12:8080 10.36.9.1:44810
I [2019-04-03T16:15:17.939Z] - 119 135 0 "127.0.0.1:8080" inbound|8080||mgmtCluster 127.0.0.1:37574 10.36.9.12:8080 10.36.9.1:44816
I [2019-04-03T16:15:18.959Z] "GET /poolsHTTP/1.1" 404 NR 0 0 0 - "-" "Go-http-client/1.1" "997c061a-d5d2-425d-b123-bf76073d148a" "cb-example-0000.cb-example.couchbase.svc:8091" "-" - - 10.36.8.13:8091 10.36.9.12:59298
I [2019-04-03T16:15:18.962Z] "GET /poolsHTTP/1.1" 404 NR 0 0 0 - "-" "Go-http-client/1.1" "b03def85-726f-4107-8b27-9fc8b5bddea7" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.9.12:57642
I [2019-04-03T16:15:20.939Z] - 119 135 0 "127.0.0.1:8080" inbound|8080||mgmtCluster 127.0.0.1:37586 10.36.9.12:8080 10.36.9.1:44828
E time="2019-04-03T16:15:26Z" level=warning msg="cluster status: failed with error [Get http://cb-example-0000.cb-example.couchbase.svc:8091/pools/default: uuid check: unexpected status code '404 Not Found' from cb-example-0000.cb-example.couchbase.svc:8091], [Get http://cb-example-0003.cb-example.couchbase.svc:8091/pools/default: uuid check: unexpected status code '404 Not Found' from cb-example-0003.cb-example.couchbase.svc:8091] ...retrying" cluster-name=cb-example module=cluster
E time="2019-04-03T16:15:31Z" level=warning msg="cluster status: failed with error [Get http://cb-example-0000.cb-example.couchbase.svc:8091/pools/default: uuid check: unexpected status code '404 Not Found' from cb-example-0000.cb-example.couchbase.svc:8091], [Get http://cb-example-0003.cb-example.couchbase.svc:8091/pools/default: uuid check: unexpected status code '404 Not Found' from cb-example-0003.cb-example.couchbase.svc:8091] ...retrying" cluster-name=cb-example module=cluster
I [2019-04-03T16:15:23.939Z] - 119 135 0 "127.0.0.1:8080" inbound|8080||mgmtCluster 127.0.0.1:37592 10.36.9.12:8080 10.36.9.1:44834
I [2019-04-03T16:15:26.939Z] - 119 135 0 "127.0.0.1:8080" inbound|8080||mgmtCluster 127.0.0.1:37604 10.36.9.12:8080 10.36.9.1:44846
I [2019-04-03T16:15:26.987Z] "GET /poolsHTTP/1.1" 404 NR 0 0 0 - "-" "Go-http-client/1.1" "c20d057e-646d-4eb3-8931-a220126c27d5" "cb-example-0000.cb-example.couchbase.svc:8091" "-" - - 10.36.8.13:8091 10.36.9.12:59326
I [2019-04-03T16:15:26.991Z] "GET /poolsHTTP/1.1" 404 NR 0 0 0 - "-" "Go-http-client/1.1" "c46e9753-1249-4b8c-8fc9-889e74a0d70b" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.9.12:57670
I [2019-04-03T16:15:29.939Z] - 119 135 1 "127.0.0.1:8080" inbound|8080||mgmtCluster 127.0.0.1:37616 10.36.9.12:8080 10.36.9.1:44858
I [2019-04-03T16:15:31.986Z] "GET /poolsHTTP/1.1" 404 NR 0 0 0 - "-" "Go-http-client/1.1" "30c8522f-5797-488c-b234-d1c5a43d9826" "cb-example-0000.cb-example.couchbase.svc:8091" "-" - - 10.36.8.13:8091 10.36.9.12:59338
I [2019-04-03T16:15:31.990Z] "GET /poolsHTTP/1.1" 404 NR 0 0 0 - "-" "Go-http-client/1.1" "a4f74a5e-c026-46b4-b9e3-8b20f65477b4" "cb-example-0003.cb-example.couchbase.svc:8091" "-" - - 10.36.9.15:8091 10.36.9.12:57682
E time="2019-04-03T16:15:36Z" level=warning msg="cluster status: failed with error [Get http://cb-example-0000.cb-example.couchbase.svc:8091/pools/default: uuid check: unexpected status code '404 Not Found' from cb-example-0000.cb-example.couchbase.svc:8091], [Get http://cb-example-0003.cb-example.couchbase.svc:8091/pools/default: uuid check: unexpected status code '404 Not Found' from cb-example-0003.cb-example.couchbase.svc:8091] ...retrying" cluster-name=cb-example module=cluster
E time="2019-04-03T16:15:41Z" level=warning msg="cluster status: failed with error [Get http://cb-example-0000.cb-example.couchbase.svc:8091/pools/default: uuid check: unexpected status code '404 Not Found' from cb-example-0000.cb-example.couchbase.svc:8091], [Get http://cb-example-0003.cb-example.couchbase.svc:8091/pools/default: uuid check: unexpected status code '404 Not Found' from cb-example-0003.cb-example.couchbase.svc:8091] ...retrying" cluster-name=cb-example module=cluster
I [2019-04-03T16:15:32.939Z] - 119 135 0 "127.0.0.1:8080" inbound|8080||mgmtCluster 127.0.0.1:37624 10.36.9.12:8080 10.36.9.1:44866
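One observation on these logs, in case it helps with debugging: the NR flag in the Envoy entries means "no route found", so the 404s for cb-example-0000.cb-example.couchbase.svc:8091 and /pools are being returned by the sidecar itself, not by Couchbase, because the proxy has no route for the per-pod hostnames the nodes and the operator use; that would also explain the operator's failing uuid check. A quick way to test whether the sidecar is the cause is to keep the Couchbase pods out of the mesh, for example by turning injection off for the namespace. This is only a sketch and assumes injection was enabled with the usual istio-injection namespace label:
apiVersion: v1
kind: Namespace
metadata:
  name: couchbase
  labels:
    # stop automatic sidecar injection for new pods in this namespace;
    # the per-pod annotation sidecar.istio.io/inject: "false" is an alternative.
    # Existing pods keep their sidecar until they are recreated.
    istio-injection: disabled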
I found a strange call in my website's access.log.
Where does this call come from, and how can I block it?
> 40.77.167.156 - - [06/Jan/2019:14:18:16 +0100] "GET /nouvelles/news-k-1/17451-fight-card-glory-world-series-avec-semmy-schilt-vs-brice-guidon
> HTTP/1.1" 404 4060 "-" "Mozilla/5.0 (compatible; bingbot/2.0;
> +http://www.bing.com/bingbot.htm)
I am working on an assignment where I need to parse a log file and create a website based on it. One of the requirements is that I count the number of hits that happened yesterday, and I'm lost when it comes to this. I've attached my code and the log file I'm working with, hoping that someone can offer some advice. Thanks.
#!/usr/bin/perl
use strict;
use warnings;
use Time::Piece;
use Time::Seconds qw(ONE_DAY);
my $yesterday = localtime() - ONE_DAY();
print $yesterday;
open(LOGFILE, "<", "access.log") or die "Could not open log file: $!";
my $yesterdayHits = 0;
my $totalhits     = 0;
my $webPage       = 'log.html';
open(WEBPAGE, ">", $webPage) or die "Could not open $webPage: $!";
print WEBPAGE ("<HEAD><TITLE>Access Counts</TITLE></HEAD>");
print WEBPAGE ("<BODY>");
print WEBPAGE ("<H1> today is: ",scalar(localtime), "</H1>");
print WEBPAGE ("<h3>Yesterday was $yesterday</h3>");
print WEBPAGE ("<TABLE BORDER=1 CELLPADDING=10 width='500px'>");
foreach my $line (<LOGFILE>) {
    $totalhits++;
    my $w = "(.+?)";
    $line =~ m/^$w $w $w \[$w:$w $w\] "$w $w $w" $w $w/;
    my $site     = $1;
    my $logName  = $2;
    my $fullName = $3;
    my $date     = $4;
    my $time     = $5;
    my $gmt      = $6;
    my $req      = $7;
    my $file     = $8;
    my $proto    = $9;
    my $status   = $10;
    my $length   = $11;
    #if($line =~ m/$yesterday/){$yesterdayHits++}
    print WEBPAGE ("<Tr><TD>$site</TD><TD>$line</TD></Tr>\n\n");
}
close(LOGFILE);
print WEBPAGE ("<h2>Total hits: $totalhits</h2>");
print WEBPAGE ("<h3>Hits Yesterday: $yesterdayHits</h3>");
print WEBPAGE ("</TABLE></P>");
print WEBPAGE ("</BODY></HTML>");
close(WEBPAGE);
Access log
66.249.65.107 - - [11/Nov/2012:19:33:01 -0400] "GET /support.html HTTP/1.1" 200 11179
111.111.111.111 - - [11/Nov/2012:19:33:01 -0400] "GET / HTTP/1.1" 200 10801
111.111.111.111 - - [08/Oct/2007:11:17:55 -0400] "GET /style.css HTTP/1.1" 200 3225
123.123.123.123 - - [26/Apr/2000:00:23:48 -0400] "GET /pics/wpaper.gif HTTP/1.0" 200 6248
123.123.123.123 - - [26/Apr/2000:00:23:40 -0400] "GET /asctortf/ HTTP/1.0" 200 8130
123.123.123.123 - - [26/Apr/2000:00:23:48 -0400] "GET /pics/5star2000.gif HTTP/1.0" 200 4005
123.123.123.123 - - [26/Apr/2000:00:23:50 -0400] "GET /pics/5star.gif HTTP/1.0" 200 1031
123.123.123.123 - - [26/Apr/2000:00:23:51 -0400] "GET /pics/a2hlogo.jpg HTTP/1.0" 200 4282
123.123.123.123 - - [26/Apr/2000:00:23:51 -0400] "GET /cgi-bin/newcount?jafsof3&width=4&font=digital&noshow HTTP/1.0" 200 36
172.16.130.42 - - [26/Apr/2000:00:00:12 -0400] "GET /contacts.html HTTP/1.0" 200 4595
10.0.1.3 - - [26/Apr/2000:00:17:19 -0400] "GET /news/news.html HTTP/1.0" 200 16716
129.21.109.81 - - [26/Apr/2000:00:16:12 -0400] "GET /download/windows/asctab31.zip HTTP/1.0" 200 1540096
192.168.198.92 - - [22/Dec/2002:23:08:37 -0400] "GET / HTTP/1.1" 200 6394
192.168.198.92 - - [22/Dec/2002:23:08:38 -0400] "GET /images/logo.gif HTTP/1.1" 200 807
192.168.72.177 - - [22/Dec/2002:23:32:14 -0400] "GET /news/sports.html HTTP/1.1" 200 3500
192.168.72.177 - - [22/Dec/2002:23:32:14 -0400] "GET /favicon.ico HTTP/1.1" 404 1997
192.168.72.177 - - [04/Nov/2012:23:32:15 -0400] "GET /style.css HTTP/1.1" 200 4138
192.168.72.177 - - [22/Dec/2002:23:32:16 -0400] "GET /js/ads.js HTTP/1.1" 200 10229
192.168.72.177 - - [22/Dec/2002:23:32:19 -0400] "GET /search.php HTTP/1.1" 400 1997
127.0.0.1 - - [10/Apr/2007:10:39:11 +0300] "GET / HTTP/1.1" 500 606
127.0.0.1 - - [10/Apr/2007:10:39:11 +0300] "GET /favicon.ico HTTP/1.1" 200 766
139.12.0.2 - - [10/Apr/2007:10:40:54 +0300] "GET / HTTP/1.1" 500 612
139.12.0.2 - - [10/Apr/2007:10:40:54 +0300] "GET /favicon.ico HTTP/1.1" 200 766
127.0.0.1 - - [10/Apr/2007:10:53:10 +0300] "GET / HTTP/1.1" 500 612
127.0.0.1 - - [10/Apr/2007:10:54:08 +0300] "GET / HTTP/1.0" 200 3700
127.0.0.1 - - [10/Apr/2007:10:54:08 +0300] "GET /style.css HTTP/1.1" 200 614
127.0.0.1 - - [10/Apr/2007:10:54:08 +0300] "GET /img/pti-round.jpg HTTP/1.1" 200 17524
127.0.0.1 - - [10/Apr/2007:10:54:21 +0300] "GET /unix_sysadmin.html HTTP/1.1" 200 3880
217.0.22.3 - - [04/Nov/2012:10:54:51 +0300] "GET / HTTP/1.1" 200 34
217.0.22.3 - - [10/Apr/2007:10:54:51 +0300] "GET /favicon.ico HTTP/1.1" 200 11514
217.0.22.3 - - [10/Apr/2007:10:54:53 +0300] "GET /cgi/pti.pl HTTP/1.1" 500 617
127.0.0.1 - - [10/Apr/2007:10:54:08 +0300] "GET / HTTP/0.9" 200 3700
217.0.22.3 - - [10/Apr/2007:10:58:27 +0300] "GET / HTTP/1.1" 200 3700
217.0.22.3 - - [10/Apr/2007:10:58:34 +0300] "GET /unix_sysadmin.html HTTP/1.1" 200 3880
217.0.22.3 - - [10/Apr/2007:10:58:45 +0300] "GET /talks/Fundamentals/read-excel-file.html HTTP/1.1" 404 311
use POSIX;
$yesterday = strftime("%d/%b/%Y",localtime(time()-86400));
$yesterday now contains yesterday's date in the logfile's format (e.g. "11/Nov/2012"). You can filter lines by checking $line =~ /$yesterday/;
http://perldoc.perl.org/POSIX.html
A basic approach is to reformat the $yesterday variable to match the logfile's date format, like this:
$yesterday =~ s!\w+\s+(\w+)\s+(\d+)\s+\d{2}:\d{2}:\d{2}\s+(\d+)!$2/$1/$3!;
Now you can un-comment and change the counting line to
if ($date eq $yesterday) { $yesterdayHits++ }
to start counting.
A not-so-fast but quite precise version; it uses UNIX timestamps:
#!/usr/bin/env perl
use strict;
use warnings 'all';
use HTTP::Date;

# Print every log line whose timestamp falls between 48 and 24 hours ago,
# i.e. "yesterday" as a rolling 24-hour window.
while (<>) {
    my $stamp;
    print if
        m{\s+ \[ (.+?) \] \s+ "}x        # capture the [date:time zone] field
        and $stamp = str2time($1)        # parse it into a UNIX timestamp
        and $stamp > time - 86_400 * 2
        and $stamp < time - 86_400;
}