I am running two pods for Apache and four pods for Tomcat (all in the same namespace). I have exposed the Apache and Tomcat ports (80 and 8080) to the outside world through Services.
When I call the Apache URL, the request never reaches the Tomcat pods.
When I check the mod_jk log, it reports that Tomcat is not up or is listening on the wrong port.
I am using mod_jk for the Apache-to-Tomcat communication.
I have included my worker.properties file and JkMount configuration below. When I access Tomcat directly, the application works fine. I am new to Kubernetes.
Worker.properties:
worker.list=tomcat,loadbalancer
worker.tomcat.type=ajp13
#worker.myserver.host=127.0.0.1
worker.tomcat.port=8009
worker.tomcat.lbfactor=1
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=tomcat
worker.loadbalancer.sticky_session=1
JK mount:
JkMount /Userinfo/* tomcat
httpd.conf:
JkWorkersFile /opt/apache/conf/worker.properties
Include /opt/apache/conf/jkmount.conf
LoadModule jk_module modules/mod_jk.so
Tomcat AJP:
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
Message:
[Tue Dec 08 17:52:54.811 2020] [7:140256921777920] [info] jk_open_socket::jk_connect.c (817): connect to 127.0.0.1:8009 failed (errno=111)
[Tue Dec 08 17:52:54.811 2020] [7:140256921777920] [info] ajp_connect_to_endpoint::jk_ajp_common.c (1068): (tomcat) Failed opening socket to (127.0.0.1:8009) (errno=111)
[Tue Dec 08 17:52:54.811 2020] [7:140256921777920] [error] ajp_send_request::jk_ajp_common.c (1728): (tomcat) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=111)
[Tue Dec 08 17:52:54.811 2020] [7:140256921777920] [info] ajp_service::jk_ajp_common.c (2778): (tomcat) sending request to tomcat failed (recoverable), because of error during request sending (attempt=2)
[Tue Dec 08 17:52:54.811 2020] [7:140256921777920] [error] ajp_service::jk_ajp_common.c (2799): (tomcat) connecting to tomcat failed (rc=-3, errors=1, client_errors=0).
[Tue Dec 08 17:52:54.811 2020] [7:140256921777920] [info] jk_handler::mod_jk.c (2995): Service error=-3 for worker=tomcat
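For reference, the log above shows mod_jk trying 127.0.0.1:8009; with no worker.tomcat.host line, mod_jk falls back to localhost, which inside the Apache pod is not where Tomcat is running. A minimal sketch of a worker.properties that points the worker at a Kubernetes Service instead is shown below (the Service name tomcat-svc, the default namespace, and a Service port of 8009 for AJP are assumptions, not taken from the question):
worker.list=tomcat,loadbalancer
# AJP worker pointing at the Tomcat Service DNS name instead of localhost
worker.tomcat.type=ajp13
worker.tomcat.host=tomcat-svc.default.svc.cluster.local
worker.tomcat.port=8009
worker.tomcat.lbfactor=1
# load balancer worker in front of the AJP worker
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=tomcat
worker.loadbalancer.sticky_session=1
For this to connect, the Tomcat Service would also need to expose the AJP port (8009), not only 8080.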
I am not quite sure which node is causing this behaviour, and there are too many flows for me to reinstall from scratch; I also do not have a backup of them.
I realized this morning that I can no longer access the HTTP GUI of my Node-RED instance on my Raspberry Pi Zero. I had just edited some flows, nothing really serious.
I am trying to start Node-RED on my Raspberry Pi Zero, but no GUI/UI comes up that would let me access the Node-RED instance. I don't know how to solve or troubleshoot this. What I am doing, or trying to do, is:
pi@nodered-pi:~/.node-red $ node-red-start
Start Node-RED
Once Node-RED has started, point a browser at http://192.168.1.42:1880
On Pi Node-RED works better with the Firefox or Chrome browser
Use node-red-stop to stop Node-RED
Use node-red-start to start Node-RED again
Use node-red-log to view the recent log output
Use sudo systemctl enable nodered.service to autostart Node-RED at every boot
Use sudo systemctl disable nodered.service to disable autostart on boot
To find more nodes and example flows - go to http://flows.nodered.org
Starting as a systemd service.
Started Node-RED graphical event wiring tool.
19 Aug 15:13:55 - [info]
Welcome to Node-RED
===================
19 Aug 15:13:55 - [info] Node-RED version: v0.18.7
19 Aug 15:13:55 - [info] Node.js version: v8.11.1
19 Aug 15:13:55 - [info] Linux 4.14.52+ arm LE
19 Aug 15:14:06 - [info] Loading palette nodes
19 Aug 15:14:37 - [info] Dashboard version 2.9.6 started at /ui
19 Aug 15:14:49 - [warn] ------------------------------------------------------
19 Aug 15:14:49 - [warn] [node-red-contrib-delta-timed/delta-time] 'delta' already registered by module node-red-contrib-change-detect
19 Aug 15:14:49 - [warn] ------------------------------------------------------
19 Aug 15:14:49 - [info] Settings file : /home/pi/.node-red/settings.js
19 Aug 15:14:49 - [info] User directory : /home/pi/.node-red
19 Aug 15:14:49 - [warn] Projects disabled : set editorTheme.projects.enabled=true to enable
19 Aug 15:14:49 - [info] Flows file : /home/pi/.node-red/flows_nodered-pi.json
19 Aug 15:14:50 - [info] Server now running at http://127.0.0.1:1880/
19 Aug 15:14:50 - [warn]
---------------------------------------------------------------------
Your flow credentials file is encrypted using a system-generated key.
If the system-generated key is lost for any reason, your credentials
file will not be recoverable, you will have to delete it and re-enter
your credentials.
You should set your own key using the 'credentialSecret' option in
your settings file. Node-RED will then re-encrypt your credentials
file using your chosen key the next time you deploy a change.
---------------------------------------------------------------------
19 Aug 15:14:50 - [warn] Error loading credentials: SyntaxError: Unexpected token T in JSON at position 0
19 Aug 15:14:50 - [warn] Error loading flows: Error: Failed to decrypt credentials
19 Aug 15:14:51 - [info] Starting flows
19 Aug 15:15:01 - [warn] [telegram receiver:Telegram Receiver] bot not initialized
19 Aug 15:15:01 - [warn] [telegram sender:Temperatur Wetterstation] bot not initialized.
19 Aug 15:15:01 - [error] [function:Versorge mit Information] SyntaxError: Invalid or unexpected token
19 Aug 15:15:01 - [info] Started flows
19 Aug 15:15:02 - [info] [sonoff-server:166ef3ba.0029bc] SONOFF Server Started On Port 1080
19 Aug 15:15:02 - [red] Uncaught Exception:
19 Aug 15:15:02 - Error: listen EACCES 0.0.0.0:443
at Object._errnoException (util.js:1022:11)
at _exceptionWithHostPort (util.js:1044:20)
nodered.service: Main process exited, code=exited, status=1/FAILURE
nodered.service: Unit entered failed state.
nodered.service: Failed with result 'exit-code'.
nodered.service: Service hold-off time over, scheduling restart.
Stopped Node-RED graphical event wiring tool.
Started Node-RED graphical event wiring tool.
19 Aug 15:15:20 - [info]
Welcome to Node-RED
===================
19 Aug 15:15:20 - [info] Node-RED version: v0.18.7
19 Aug 15:15:02 - Error: listen EACCES 0.0.0.0:443
at Object._errnoException (util.js:1022:11)
at _exceptionWithHostPort (util.js:1044:20)
This error implies that Node-RED could not bind to port 443. Usually that means something else is already listening there (an existing copy of Node-RED, or another service), or the process does not have permission to open a privileged port below 1024. You can check which applications are listening on which ports with the following command:
lsof -i :443
This will list what is listening on port 443.
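For example (the output below is illustrative only; the PID and process name are made up):
sudo lsof -i :443
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
node    1234 root   12u  IPv4  34567      0t0  TCP *:https (LISTEN)
Running it with sudo makes sockets owned by other users visible; if nothing at all is listed, the EACCES is more likely a permissions problem than a port conflict.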
I am trying to install Kubernetes on Debian 9.3. I followed the instructions in this document, https://kubernetes.io/docs/setup/independent/install-kubeadm/, but it failed to create the cluster with a timeout error. The commands I used are as follows:
export HTTP_PROXY=http://192.168.56.1:1080 # this is my internet proxy
export HTTPS_PROXY=http://192.168.56.1:1080
export NO_PROXY=127.0.0.1,192.168.56.*,10.244.*,10.96.*
kubeadm init --apiserver-advertise-address=192.168.56.101 --pod-network-cidr=10.244.0.0/16
The last command hung for about an hour and then failed with a timeout. I found (via docker ps) that several containers were running: kube-controller-manager-amd64, etcd-amd64, kube-apiserver-amd64, kube-scheduler-amd64, and 4 instances of pause-amd64.
The error messages are as follows:
duler-debvm01_kube-system(660259102d57385a8043d025ac189c87)": Get https://192.168.56.101:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-debvm01: net/http: TLS handshake timeout
Apr 06 21:44:49 DebVM01 kubelet[10665]: E0406 21:44:49.923017 10665 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://192.168.56.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Ddebvm01&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Apr 06 21:44:49 DebVM01 kubelet[10665]: E0406 21:44:49.924966 10665 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://192.168.56.101:6443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Apr 06 21:44:49 DebVM01 kubelet[10665]: E0406 21:44:49.925892 10665 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get xxx/api/v1/pods?fieldSelector=spec.nodeName%3Ddebvm01&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Apr 06 21:44:50 DebVM01 kubelet[10665]: E0406 21:44:50.029333 10665 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node "debvm01" not found
Apr 06 21:44:50 DebVM01 kubelet[10665]: E0406 21:44:50.379543 10665 kubelet_node_status.go:106] Unable to register node "debvm01" with API server: Post xxx: net/http: TLS handshake timeout
Apr 06 21:44:52 DebVM01 kubelet[10665]: E0406 21:44:52.575452 10665 event.go:209] Unable to write event: 'Post xxxx: net/http: TLS handshake timeout' (may retry after sleeping)
Apr 06 21:44:57 DebVM01 kubelet[10665]: I0406 21:44:57.380498 10665 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Apr 06 21:44:57 DebVM01 kubelet[10665]: I0406 21:44:57.430059 10665 kubelet_node_status.go:82] Attempting to register node debvm01
Apr 06 21:45:00 DebVM01 kubelet[10665]: E0406 21:45:00.030635 10665 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node "debvm01" not found
Apr 06 21:45:01 DebVM01 kubelet[10665]: I0406 21:45:01.484580 10665 kubelet_node_status.go:85] Successfully registered node debvm01
The error messages above have been trimmed to remove a lot of repeated lines such as the following:
Apr 06 22:46:20 DebVM01 kubelet[10665]: E0406 22:46:20.773690 10665 kubelet.go:2104] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Apr 06 22:46:25 DebVM01 kubelet[10665]: W0406 22:46:25.779141 10665 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Kubernetes v1.9.3
Could anyone help me?
kubeadm init --apiserver-advertise-address=192.168.56.101
--pod-network-cidr=10.244.0.0/16
From kubeadm documentation:
--apiserver-advertise-address ip-address The IP address the API Server will advertise it's listening on. Specify '0.0.0.0' to use the
address of the default network interface.
Unless otherwise specified, kubeadm uses the default gateway’s network
interface to advertise the master’s IP. If you want to use a different
network interface, specify --apiserver-advertise-address=ip-address
From kubernetes api-server documentation:
--advertise-address ip-address The IP address on which to advertise the apiserver to members of the cluster. This address must
be reachable by the rest of the cluster. If blank, the --bind-address
will be used. If --bind-address is unspecified, the host's default
interface will be used.
I've done a couple of experiments which confirm that it is necessary for ip-address to be configured on (or added as a secondary IP to) one of the master instance's interfaces.
Just double check if that interface is up.
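For example (the interface name enp0s8 is only an assumption for a typical VirtualBox host-only adapter):
# brief overview of every interface, its state and addresses
ip -br addr show
# or inspect the specific interface that should carry 192.168.56.101
ip addr show enp0s8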
The last error message,
network plugin is not ready: cni config uninitialized
means that the Kubernetes networking subsystem is absent or broken. Try to install or reinstall it with:
kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
This part is described in section "(3/4) Installing a pod network" of the document you mentioned.
If you are stuck, try to reinstall your cluster following this manual.
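A minimal sketch of that reinstall, reusing the same flags as in the question:
# wipe the half-initialised control plane state
sudo kubeadm reset
# initialise again
sudo kubeadm init --apiserver-advertise-address=192.168.56.101 --pod-network-cidr=10.244.0.0/16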
Background
Apache server running on a machine and producing logs into /var/log/httpd/error_log
Using syslog-ng to send the log to port 5140
Eventually it will be consumed by a Kafka producer and sent to a topic
Settings
options {
flush_lines (0);
time_reopen (10);
log_fifo_size (1000);
long_hostnames (off);
use_dns (no);
use_fqdn (no);
create_dirs (no);
keep_hostname (no);
};
source s_apache2 {
file("/var/log/httpd/error_log" flags(no-parse));
}
destination loghost {
tcp("*.*.*.*" port(5140));
}
Problem
syslog-ng prepends a timestamp and hostname to the log data, which is undesirable:
<13>Jan 10 11:01:03 hostname [Tue Jan 10 11:01:02 2017] [notice] Digest: generating secret for digest authentication ...
<13>Jan 10 11:01:03 hostname [Tue Jan 10 11:01:02 2017] [notice] Digest: done
<13>Jan 10 11:01:03 hostname [Tue Jan 10 11:01:02 2017] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.4.30 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
Desired output (Each log line as is from error_log file)
[Tue Jan 10 11:01:02 2017] [notice] Digest: generating secret for digest authentication ...
[Tue Jan 10 11:01:02 2017] [notice] Digest: done
[Tue Jan 10 11:01:02 2017] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.4.30 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
Platform
CentOS release 6.4 (Final)
syslog-ng #version:3.2
PS
Syslog-ng to Kafka integration: please let me know if anybody has tried this, as it would render my Java Kafka producer redundant.
When you use the flags(no-parse) option in syslog-ng, it does not try to parse the different fields of the message; it puts everything into the MESSAGE field of the incoming log message and prepends a syslog header. To remove this header, use a template in your syslog-ng destination:
template t_msg_only { template("${MSG}\n"); };
destination loghost {
tcp("*.*.*.*" port(5140) template(t_msg_only) );
}
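If it is not already in your configuration, the source and destination also need to be connected by a log path, using the names from your config:
log { source(s_apache2); destination(loghost); };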
To use the Kafka destination of syslog-ng, you need a newer version of syslog-ng (I'd recommend 3.8 or 3.9). Peter Czanik has written a detailed post about installing newer syslog-ng RPMs for CentOS.
I am running Puppet v2.7.14 on CentOS 6 and using Apache/Passenger instead of WEBrick. I was told that the puppetmaster service is not required to be running (hence: chkconfig puppetmaster off) when using httpd and Passenger, but in my case, if I don't start puppetmasterd manually, none of the agents can connect to the master. I can start httpd just fine and Passenger seems to start okay as well. This is my Apache configuration file:
# /etc/httpd/conf.d/passenger.conf
LoadModule passenger_module modules/mod_passenger.so
<IfModule mod_passenger.c>
PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.12
PassengerRuby /usr/bin/ruby
#PassengerTempDir /var/run/rubygem-passenger
PassengerHighPerformance on
PassengerUseGlobalQueue on
PassengerMaxPoolSize 15
PassengerPoolIdleTime 150
PassengerMaxRequests 10000
PassengerStatThrottleRate 120
RackAutoDetect Off
RailsAutoDetect Off
</IfModule>
Upon restart, I see these lines in the httpd error_log:
[Sat Jun 09 04:06:47 2012] [notice] caught SIGTERM, shutting down
[Sat Jun 09 09:06:51 2012] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Jun 09 09:06:51 2012] [notice] Digest: generating secret for digest authentication ...
[Sat Jun 09 09:06:51 2012] [notice] Digest: done
[Sat Jun 09 09:06:51 2012] [notice] Apache/2.2.15 (Unix) DAV/2 Phusion_Passenger/3.0.12 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
And passenger-status prints this information on the screen:
----------- General information -----------
max = 15
count = 0
active = 0
inactive = 0
Waiting on global queue: 0
----------- Application groups -----------
But still, as I said, none of my agents can actually talk to the master until I start puppetmasterd manually. Does anyone know what I am still missing? Or is this the way it is supposed to be? Cheers!!
It sounds like you may be missing an /etc/httpd/conf.d/puppetmaster.conf file that's based on https://github.com/puppetlabs/puppet/blob/master/ext/rack/files/apache2.conf
Without something like this, you're missing the glue that maps port 8140 to the Rack-based puppetmasterd.
See http://docs.puppetlabs.com/guides/passenger.html
https://github.com/puppetlabs/puppet/tree/master/ext/rack
http://www.modrails.com/documentation/Users%20guide%20Apache.html#_deploying_a_rack_based_ruby_application_including_rails_gt_3
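As a rough sketch of what that vhost looks like (based on the puppetlabs example linked above; the certificate paths, hostnames, and Rack directory are assumptions you will need to adjust):
# /etc/httpd/conf.d/puppetmaster.conf -- illustrative sketch only
Listen 8140
<VirtualHost *:8140>
    SSLEngine on
    SSLCertificateFile      /var/lib/puppet/ssl/certs/puppet.example.com.pem
    SSLCertificateKeyFile   /var/lib/puppet/ssl/private_keys/puppet.example.com.pem
    SSLCACertificateFile    /var/lib/puppet/ssl/ca/ca_crt.pem
    SSLVerifyClient optional
    SSLOptions +StdEnvVars

    # Rack application wrapping puppetmasterd; expects a config.ru one level up
    DocumentRoot /usr/share/puppet/rack/puppetmasterd/public/
    RackBaseURI /
    <Directory /usr/share/puppet/rack/puppetmasterd/>
        Options None
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>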
After a few days of banging my head, it's now running. The main problem was with the port number: the puppetmaster was running on a different port than the one the puppet agents were trying to communicate on.
Another thing: RackAutoDetect On must be executed before the dashboard vhost file, so I renamed the Passenger vhost file to 00_passenger.conf to make sure it is loaded first. After that I got puppetmaster running using Apache/Passenger. Cheers!!
I have just purchased a license for Zend Studio 9. I have only a minimal amount of experience with the Zend framework, and no previous experience with Zend Studio. I am using http://framework.zend.com/manual/en/ as a tutorial on the framework and have browsed through the resources located at http://www.zend.com/en/products/studio/resources for help with the studio software.
My main problem is that after creating a new Zend project with zstudio, I'm not seeing the initial welcome message. Here are the steps I am using:
I've already installed the Zend Server and confirmed that web apps are working (made some test files, they all parsed correctly).
Create a new project with Zend Studio.
a. File->New->Local PHP Project
b. For location, I am using C:\Program Files\Zend\Apache2\htdocs.
c. For version I used the default "Zend Framework 1.11.11 (Built-in)"
I go to http://localhost:81/projectname. Instead of the default index controller being called, I just see my directory structure.
Additional info:
OS: Windows 7
PHP version: 5.3
ERROR LOGS:
[Wed Nov 30 14:32:30 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
[Wed Nov 30 14:32:30 2011] [warn] pid file C:/Program Files (x86)/Zend/Apache2/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
[Wed Nov 30 14:32:30 2011] [notice] Digest: generating secret for digest authentication ...
[Wed Nov 30 14:32:30 2011] [notice] Digest: done
[Wed Nov 30 14:32:31 2011] [notice] Apache/2.2.16 (Win32) mod_ssl/2.2.16 OpenSSL/0.9.8o configured -- resuming normal operations
[Wed Nov 30 14:32:31 2011] [notice] Server built: Aug 8 2010 16:45:53
[Wed Nov 30 14:32:31 2011] [notice] Parent: Created child process 13788
[Wed Nov 30 14:32:32 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
[Wed Nov 30 14:32:32 2011] [notice] Digest: generating secret for digest authentication ...
[Wed Nov 30 14:32:32 2011] [notice] Digest: done
[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Child process is running
[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Acquired the start mutex.
[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Starting 64 worker threads.
[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Starting thread to listen on port 10081.
[Wed Nov 30 14:32:33 2011] [notice] Child 13788: Starting thread to listen on port 81.
If you navigate to http://localhost:81/projectname/index/index does the correct screen load?
If so:
Check that the .htaccess file in your public directory contains the correct rewrite rules for Zend Framework.
Check your httpd.conf file and make sure index.php is added to the DirectoryIndex directive.
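For reference, the stock Zend Framework 1 public/.htaccess rewrite rules look like this (assuming mod_rewrite is enabled and AllowOverride permits .htaccess overrides):
RewriteEngine On
# serve real files, symlinks and directories directly
RewriteCond %{REQUEST_FILENAME} -s [OR]
RewriteCond %{REQUEST_FILENAME} -l [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^.*$ - [NC,L]
# send everything else to the front controller
RewriteRule ^.*$ index.php [NC,L]
And in httpd.conf, the DirectoryIndex directive should include index.php, for example:
DirectoryIndex index.php index.html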
I think the solution is going to be the second bullet, but let me know what you find and I can help further if that doesn't work. Make sure to restart apache after you make any changes to httpd.conf.
Otherwise, report any errors you see when you access the controller directly, and check Apache's error_log file to see if you get any errors.
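For example, you can watch the error log live from PowerShell (the directory follows the pid-file path shown in your log above; the error_log file name is an assumption based on the Apache default):
Get-Content "C:\Program Files (x86)\Zend\Apache2\logs\error.log" -Wait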