I recently set up a Minecraft server on OpenShift with this tutorial.
After I set up the port forwarding, I was able to get a connection to my server in Minecraft. But I cannot log in! It simply ends with the message: Timed out.
In the server logs I only see what I already know: I lost my connection. Here are the logs:
2016-01-01 17:29:17 [INFO] Starting minecraft server version 1.
2016-01-01 17:29:17 [INFO] Loading properties
2016-01-01 17:29:17 [INFO] Default game type: SURVIVAL
2016-01-01 17:29:17 [INFO] Generating keypair
2016-01-01 17:29:18 [INFO] Starting Minecraft server on 127.2.1
2016-01-01 17:29:18 [INFO] Preparing level "world"
2016-01-01 17:29:18 [INFO] Preparing start region for level 0
2016-01-01 17:29:20 [INFO] Preparing spawn area: 52%
2016-01-01 17:29:20 [INFO] Done (1.911s)! For help, type "help"
2016-01-01 17:30:41 [SEVERE] Reached end of stream
2016-01-01 17:30:41 [INFO] /127.2.105.129:29361 lost connection
java.io.IOException: Bad packet id 72
at ei.a(SourceFile:193)
at ci.i(SourceFile:250)
at ci.c(SourceFile:16)
at cj.run(SourceFile:94)
2016-01-01 18:10:21 [INFO] /127.2.105.129:32075 lost connection
java.io.IOException: Bad packet id 72
at ei.a(SourceFile:193)
at ci.i(SourceFile:250)
at ci.c(SourceFile:16)
at cj.run(SourceFile:94)
2016-01-01 18:10:21 [INFO] /127.2.105.129:32098 lost connection
java.io.IOException: Bad packet id 72
at ei.a(SourceFile:193)
at ci.i(SourceFile:250)
at ci.c(SourceFile:16)
at cj.run(SourceFile:94)
java.io.IOException: Bad packet id 72 indicates a malformed packet. I've read that it can mean anything from not specifying the correct port to trying to connect with incompatible mod versions.
In your case, it looks like you are explicitly defining the server-ip in your server.properties. In the startup logs:
2016-01-01 17:29:18 [INFO] Starting Minecraft server on 127.2.1
127.2.1 is not a valid address, so it makes sense to me that packet transmission would be malformed. Leave this field blank (as it is by default) or provide a valid IPv4 address.
I feel pretty confident that is what is going on, because the tutorial you provided makes you set this field as well.
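For reference, the relevant server.properties entries would look something like this (a sketch; 25565 is the default Minecraft port):
server-ip=
server-port=25565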
Bad packet id 72 means that you are trying to log in to a server whose port isn't configured correctly. In the log you mentioned: 2016-01-01 17:29:18 [INFO] Starting Minecraft server on 127.2.1.
Solution (I have configured my test server similarly to yours):
1. Go to server.properties and remove everything after server-ip=. That makes it fall back to the default IP.
2. Go to your router's home page and port forward the server to 25565.
3. It should now work.
Good luck on your new server! :)
I am having trouble connecting to libera.chat and irc.libera.chat using Konversation Version 1.8.21123 on Jammy Jellyfish (fully updated). I have worked through the steps given on https://userbase.kde.org/Konversatio...tication#step5 and still cannot connect. The repeating log is shown below.
[12:44] [Info] Looking for server irc.libera.chat (port 6697)...
[12:44] [Info] Server found, connecting...
[12:44] [Info] Negotiating capabilities with server...
[12:44] [Notice] -lithium.libera.chat- *** Checking Ident
[12:44] [Notice] -lithium.libera.chat- *** Looking up your hostname...
[12:44] [Notice] -lithium.libera.chat- *** Couldn't look up your hostname
[12:45] [Notice] -lithium.libera.chat- *** No Ident response
[12:45] [Capabilities] account-notify away-notify chghost extended-join multi-prefix sasl=PLAIN,ECDSA-NIST256P-CHALLENGE,EXTERNAL tls account-tag cap-notify echo-message server-time solanum.chat/identify-msg solanum.chat/oper solanum.chat/realhost
[12:45] [Info] Requesting capabilities: account-notify away-notify chghost extended-join multi-prefix sasl cap-notify server-time
[12:45] [Info] SASL capability acknowledged by server, attempting SASL PLAIN authentication...
[12:45] [Error] SASL authentication attempt failed.
[12:45] [Info] Closing capabilities negotiation.
[12:45] [Error] Connection to server irc.libera.chat (port 6697) lost: The TLS/SSL connection has been closed.
[12:45] [Info] Trying to reconnect to irc.libera.chat (port 6697) in 10 seconds.
[12:45] [Info] Looking for server irc.libera.chat (port 6697)... <-- Log repeats from this line.
Is there something blatant that I have overlooked?
Is there some web page that I need to visit in order to register my ident/hostname/whatever (!)?
Stuart
I'm a bit confused about how TCP probes are handled by Kubernetes. The documentation says:
A third type of liveness probe uses a TCP socket. With this
configuration, the kubelet will attempt to open a socket to your
container on the specified port. If it can establish a connection, the
container is considered healthy, if it can't it is considered a
failure.
source
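For context, such a probe is declared in the container spec roughly like this (the port and timings below are placeholders, not values from my setup):
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10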
But as far as I know, a socket client is connected before the server performs accept() on the socket; that TCP handshake is managed by the OS. So how does Kubernetes "know" the state of the socket?
To give a little bit of context: I'm trying to write a unit test in my application (C++) and I cannot figure out how K8s handles this, but in K8s it does work as expected (I mean that if I do not accept the connection, it declares my container as not alive).
Thank you for your time and consideration!
Edit 1
Sorry @Steffen Ullrich, it took me some time, but here is a code sample: https://github.com/quentingodeau/k8s-probe
And here are the traces that I get:
$ kubectl logs -f $(kubectl get pods | egrep -o 'sample-deployment-[^ ]*')
[2021-07-10 18:46:22.837] [info] Server acccept the client...
[2021-07-10 18:46:23.838] [info] Server acccept the client...
[2021-07-10 18:46:24.840] [info] Server acccept the client...
[2021-07-10 18:46:25.837] [info] Server acccept the client...
[2021-07-10 18:46:26.836] [info] Server acccept the client...
[2021-07-10 18:46:27.839] [info] Server acccept the client...
[2021-07-10 18:46:28.840] [info] Server acccept the client...
[2021-07-10 18:46:29.836] [info] Server acccept the client...
[2021-07-10 18:46:30.843] [info] Server acccept the client...
[2021-07-10 18:46:31.028] [info] Send SIGUSR1
[2021-07-10 18:46:31.836] [info] Server acccept the client...
[2021-07-10 18:46:31.836] [info] Start to not procssing incoming connection
[2021-07-10 18:46:35.855] [info] End of application (signal=15)
Edit 2
https://github.com/kubernetes/kubernetes/issues/103632
But as far as I know, a socket client is connected before the server performs accept() on the socket
While it is true that the connection might be established in the OS before accept is called, it is only established after listen is called on the socket. If the application is not running (failed to start, crashed) then there is no listening socket so any connection to it will fail. If the listen queue is full since the application fails to handle new connections in time, then the connection will fail too.
This kind of cheap probe is sufficient in many cases, but it certainly does not handle every case, like making sure that the application responds correctly and within the expected time. If such checks are needed, more elaborate and maybe even application-specific probes have to be used.
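As a minimal sketch of why this works (my own illustration, not code from the linked repo): the server below calls listen() but never accept(). A client connect(), or the kubelet's TCP probe, still succeeds, because the kernel completes the handshake and parks the connection in the accept queue; only once that backlog fills up (or the process stops listening) do new connections fail.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Create a TCP socket and allow quick rebinding of the port.
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    // Bind to an arbitrary placeholder port (8080) on all interfaces.
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) { perror("bind"); return 1; }

    // Backlog of 1: the kernel completes roughly that many handshakes
    // on our behalf even though we never call accept().
    if (listen(fd, 1) < 0) { perror("listen"); return 1; }

    std::printf("Listening on 8080, never calling accept()...\n");
    pause(); // sleep forever; connections pile up in the accept queue
    return 0;
}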
I'm building my first Kaa application and I am stuck at the "Retrieve collected data" step.
I have built my client project; kaa-app runs as below:
viettq@viettq:~/Documents/workspace/kaa_example/build$ ./kaa-app
Default sample period: 1 seconds
Sampled temperature: 33
2017/02/17 2:29:10 [WARNING] [kaa_bootstrap_manager.c:612] (-7) - Could not find next Bootstrap access point (protocol: id=0x56C8FF92, version=1)
2017/02/17 2:29:10 [ERROR] [kaa_tcp_channel.c:307] (-7) - Kaa TCP channel [0x929A2016] error notifying bootstrap manager on access point failure
2017/02/17 2:29:10 [ERROR] [kaa_client.c:240] (-7) - Failed to process OUT event for the client socket 3
Sampled temperature: 30
Sampled temperature: 32
Sampled temperature: 26
2017/02/17 2:29:13 [WARNING] [kaa_bootstrap_manager.c:612] (-7) - Could not find next Bootstrap access point (protocol: id=0x56C8FF92, version=1)
2017/02/17 2:29:13 [ERROR] [kaa_tcp_channel.c:307] (-7) - Kaa TCP channel [0x929A2016] error notifying bootstrap manager on access point failure
2017/02/17 2:29:13 [ERROR] [kaa_client.c:240] (-7) - Failed to process OUT event for the client socket 3
Sampled temperature: 31
Sampled temperature: 32
Sampled temperature: 31
Some data is sent to the Kaa Sandbox server.
My SSH session on the Kaa Sandbox server is shown below:
kaa@kaa-sandbox.kaaproject.org:~$ mongo kaa
MongoDB shell version: 2.6.1
connecting to: kaa
> db.logs_80610364736216152939.find()
> db.logs_80610364736216152939.find()
> db.logs_80610364736216152939.find()
>
[2]+ Stopped mongo kaa
kaa@kaa-sandbox.kaaproject.org:~$ mongo kaa
MongoDB shell version: 2.6.1
connecting to: kaa
> db.logs_80610364736216152939.find()
>
>
>
Nothing shows up in the MongoDB shell.
I did everything in full compliance with the official Kaa tutorial:
http://kaaproject.github.io/kaa/docs/v0.10.0/Programming-guide/Your-first-Kaa-application/
But I retrieve nothing from the MongoDB shell.
Please help me solve it.
Thanks in advance!
From the logs you provided:
2017/02/17 2:29:10 [WARNING] [kaa_bootstrap_manager.c:612] (-7) - Could not find next Bootstrap access point (protocol: id=0x56C8FF92, version=1)
2017/02/17 2:29:10 [ERROR] [kaa_tcp_channel.c:307] (-7) - Kaa TCP channel [0x929A2016] error notifying bootstrap manager on access point failure
2017/02/17 2:29:10 [ERROR] [kaa_client.c:240] (-7) - Failed to process OUT event for the client socket 3
The application can't send information to the Kaa Sandbox server. Check that you correctly created all the schemas and the log appender, and check that you can connect to the server.
Please also check the Sandbox configuration for the Kaa host on the Management page. It should be set to the IP address of the machine running the Kaa Sandbox VM, and that address must be reachable from the host on which you run the application.
Note that each time the Kaa host setting is changed, you should re-generate the Kaa SDK and re-build your application with the new SDK files. Otherwise, the application might fail to connect to the Kaa Operations service.
I've installed a default, out-of-the-box FreeSWITCH instance, but when I try to make an internal call (extension to extension) it takes around 12 seconds before the call is established and I can hear the ring tone.
When I look at the log I see the connection request almost instantly, but then no activity, and after 10 seconds or more the call starts and I hear the phone ringing.
Here is the log, if it helps; please note the 10-second delay between 13:08:07 and 13:08:17.
freeswitch@vps-1170411-23979.manage.myhosting.com> 2015-09-26 13:07:41.591949 [CONSOLE] mod_voicemail.c:4091 Event Thread Started
2015-09-26 13:08:02.171949 [NOTICE] switch_channel.c:1075 New Channel sofia/internal/1001@168.144.85.16 [25229804-6471-11e5-9558-f1a7477c5309]
2015-09-26 13:08:07.331948 [INFO] mod_dialplan_xml.c:635 Processing BSmarter.CA <1001>->1000 in context default
2015-09-26 13:08:07.331948 [CRIT] mod_dptools.c:1670 WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
2015-09-26 13:08:07.331948 [CRIT] mod_dptools.c:1670 Open /usr/local/freeswitch/conf/vars.xml and change the default_password.
2015-09-26 13:08:07.331948 [CRIT] mod_dptools.c:1670 Once changed type 'reloadxml' at the console.
2015-09-26 13:08:07.331948 [CRIT] mod_dptools.c:1670 WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING
2015-09-26 13:08:17.371961 [INFO] switch_ivr_async.c:3932 Bound B-Leg: *1 execute_extension::dx XML features
2015-09-26 13:08:17.371961 [INFO] switch_ivr_async.c:3932 Bound B-Leg: *2 record_session::/usr/local/freeswitch/recordings/1001.2015-09-26-13-08-17.wav
2015-09-26 13:08:17.371961 [INFO] switch_ivr_async.c:3932 Bound B-Leg: *3 execute_extension::cf XML features
2015-09-26 13:08:17.371961 [INFO] switch_ivr_async.c:3932 Bound B-Leg: *4 execute_extension::att_xfer XML features
2015-09-26 13:08:17.391951 [NOTICE] switch_channel.c:1075 New Channel sofia/internal/1000@99.226.75.129:63329 [2e34333a-6471-11e5-957b-f1a7477c5309]
2015-09-26 13:08:17.571984 [NOTICE] sofia.c:6760 Ring-Ready sofia/internal/1000@99.226.75.129:63329!
2015-09-26 13:08:17.591949 [INFO] switch_ivr_originate.c:1193 Sending early media
2015-09-26 13:08:17.591949 [INFO] switch_core_media.c:5395 Activating RTCP PORT 4003
2015-09-26 13:08:17.591949 [NOTICE] sofia_media.c:92 Pre-Answer sofia/internal/1001@168.144.85.16!
2015-09-26 13:08:18.631986 [NOTICE] sofia.c:7580 Hangup sofia/internal/1001@168.144.85.16 [CS_EXECUTE] [ORIGINATOR_CANCEL]
Any idea what the problem might be?
This pause was introduced in order to force people to change the default password. Just edit it in vars.xml and the delay should go away.
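In a stock install the line in conf/vars.xml looks roughly like this (replace 1234 with a value of your own, then type reloadxml at the console):
<X-PRE-PROCESS cmd="set" data="default_password=1234"/>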
As Stanislav Sinyagin said, edit conf/vars.xml and change the default password 1234 to a new password. This is the only way to stop that delay.
Does anyone have any experience working with DeepDive? It involves installing Java, Python 2.x, PostgreSQL, and SBT, then the DeepDive package. I'm not very familiar with PostgreSQL, but I'm intending to learn these simultaneously.
I'm working on Ubuntu 12.04 and PostgreSQL 9.1. I made a PostgreSQL superuser using the shell command createuser tom. It's worth noting that my Ubuntu username is also tom. I then changed the password for tom with the following:
$ su - postgres
$ psql
postgres=# ALTER USER tom WITH PASSWORD 'pa$$w0RD';
DeepDive comes with a test script, which gives me the following error (I'm not including all the other text, which doesn't include errors).
[info] LogisticRegressionApp:
[info] - should work *** FAILED ***
[info] org.postgresql.util.PSQLException: FATAL: password authentication failed for user "tom"
[info] at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:398)
[info] at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:173)
[info] at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
[info] at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:136)
[info] at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:29)
[info] at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:21)
[info] at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:31)
[info] at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
[info] at org.postgresql.Driver.makeConnection(Driver.java:393)
[info] at org.postgresql.Driver.connect(Driver.java:267)
[info] ...
Then at the end:
[info] Tests: succeeded 68, failed 2, canceled 0, ignored 0, pending 3
[info] *** 2 TESTS FAILED ***
[error] Failed tests:
[error] org.deepdive.test.integration.LogisticRegressionApp
[error] org.deepdive.test.unit.InferenceManagerSpec
[error] Error during tests:
[error] org.deepdive.test.unit.PostgresInferenceDataStoreSpec
[error] org.deepdive.test.unit.PostgresExtractionDataStoreSpec
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 10 s, completed Mar 17, 2014 8:51:47 PM
If anyone can point me in some direction, I'd appreciate it.
OK, I fixed part of the problem, but this led to a different problem. Here's what I did. test.sh contains the following lines:
export PGUSER=${PGUSER:-`whoami`}
export PGPASSWORD=${PGPASSWORD:-}
which I changed to
export PGUSER=tom
export PGPASSWORD=pa$$w0rd
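(Side note: an unquoted $$ gets expanded by the shell to the process ID, so the literal password should really go in single quotes, i.e. export PGPASSWORD='pa$$w0RD'.)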
Now the test proceeds farther, and gets to the point where it prints the following:
06:49:40.953 [default-dispatcher-7][$a][LocalActorRef] INFO Message [org.deepdive.calibration.CalibrationDataWriter$WriteCalibrationData] from Actor[akka://deepdive/temp/$a] to Actor[akka://deepdive/user/inferenceManager/$a#-1669803870] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
06:49:40.955 [default-dispatcher-7][$a][LocalActorRef] INFO Message [akka.actor.PoisonPill$] from Actor[akka://deepdive/user/inferenceManager#-354953956] to Actor[akka://deepdive/user/inferenceManager/$a#-1669803870] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
06:49:40.957 [default-dispatcher-5][inferenceManager][InferenceManager$PostgresInferenceManager] INFO Starting
06:49:40.958 [default-dispatcher-6][factorGraphBuilder][FactorGraphBuilder$PostgresFactorGraphBuilder] INFO Starting
06:50:06.679 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$d][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.699 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$e][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.709 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$f][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.738 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$g][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.759 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$h][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.780 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$i][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:06.799 [TaskManagerSpec-scheduler-1][akka://TaskManagerSpec/user/$$j][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
06:50:07.396 [default-dispatcher-5][taskManager][TaskManager] INFO Memory usage: 233/982MB (max: 982MB)
And this continues ad infinitum. The key seems to be the first line, about the message not being delivered between the two Actors.
As I noted in a comment below, I checked out the postgresql.conf file, and uncommented the following line
listen_addresses = 'localhost'          # what IP address(es) to listen on;
It resolved one of the original errors, but not the second error.
In item 2 of Patrick's response, here are the parameters from the pg_hba.conf file:
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
Doesn't the local all all part allow all local connections?
The error you mention can have multiple causes:
1. Have you modified postgresql.conf to accept incoming TCP/IP connections? Check the listen_addresses parameter.
2. Have you modified pg_hba.conf? Here you need to set up an authentication method for DeepDive and/or the JDBC driver definition.
3. Lastly, can DeepDive connect to the database it intends to connect to with the credentials you have supplied to it (or to the JDBC driver definition)?
Both of the configuration files are in your $PGDATA directory, typically /etc/postgresql/9.3/main.
Note that psql connects over the Unix domain socket by default (unless you specify -h host_ip), while JDBC uses a TCP/IP connection. Try psql over TCP/IP to see if that works. If it doesn't, work on 1, then 2. If it does, work on 2, then 3.
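For example, something like the following forces a TCP/IP connection (the database name is just a placeholder; the host lines in pg_hba.conf above use md5, so psql will prompt for the password):
psql -h 127.0.0.1 -U tom -d postgres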