InvalidSignatureException when calling EKS cluster - kubernetes

I am using the Ubuntu app on Windows. Using this Ubuntu app, I was able to connect to my EKS cluster until yesterday.
Suddenly, it stopped working with the error below. I searched a lot and found many answers related to ntpd and the system clock, but I am not sure why this has happened now.
This is the error:
An error occurred (InvalidSignatureException) when calling the DescribeCluster operation: Signature expired: 20230219T082231Z is now earlier than 20230219T084848Z (20230219T085348Z - 5 min.)
How do I fix this?
I can see that the system clock is correct on my Windows machine, but it is not getting synced correctly with the Ubuntu app. What commands should I run to make this work?
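From what I found so far, the commands typically suggested are along these lines (this is only a sketch of what the ntpd/clock answers describe, not something I have confirmed for this exact setup; the cluster name and region are placeholders):

# Resync the Linux clock inside the Ubuntu app from the hardware clock exposed by Windows
sudo hwclock -s

# Alternatively, pull the current time from an NTP server (requires the ntpdate package)
sudo ntpdate pool.ntp.org

# Verify the time is correct again, then retry the AWS CLI call
date
aws eks describe-cluster --name <cluster-name> --region <region>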

Related

SCOM agent heartbeat failed on Solaris 11.4

I am using Operations Manager 2016 to monitor a host running Solaris 11.4.
After several hours (about 24 h), the host state changes to gray and I get this message: "The Run As account does not exist on the UNIX/Linux Server." However, the Run As account is valid.
The problem is fixed each time by resetting the SCOM agent on the host, but the error keeps coming back.
The service log in /var/svc/log/application-management-omid:default.log also looks normal.
Thanks.
I finally found the solution to this problem.
In my case, it was caused by a missing lsass.so library on the system.
The problem was resolved after updating this library and modifying the PAM configuration file.
Thanks.
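For anyone hitting the same symptom, a rough sketch of the checks involved (the "scx" PAM service name and the library search paths below are assumptions based on how the agent usually registers itself; the SMF service name is derived from the log path in the question):

# Check whether the PAM configuration contains entries for the SCOM/OMI agent
grep -i scx /etc/pam.conf

# Look for the library mentioned above
find /usr/lib /lib -name 'lsass.so*' 2>/dev/null

# Restart the agent service after fixing the configuration
svcadm restart svc:/application/management/omid:default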

Error creating database in Cloudera Impala (Virtual machine)

I have downloaded and started the Cloudera virtual machine with Impala. When executing the database creation statement, an error related to the catalog and state-store services appeared. I restarted the services from the console, but when trying to create a database the following message appears:
Could not connect to quickstart.cloudera:21050 (code THRIFTTRANSPORT): TTransportException('Could not connect to quickstart.cloudera:21050',)
I have restarted the following services, but the problem persists: impala-catalog, impala-state-store, impala-server.
Any idea what the problem may be?
The problem resolved itself. Apparently the virtual machine takes longer than I expected to start all of its services. Once I let it boot and waited a reasonable amount of time, it worked without problems.
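In case someone wants to verify instead of just waiting, something along these lines should show when the daemons are actually up (assuming the standard QuickStart VM service names; the database name below is just a placeholder):

# Check that the Impala daemons have finished starting
sudo service impala-state-store status
sudo service impala-catalog status
sudo service impala-server status

# Once all three report running, retry from impala-shell
impala-shell -i quickstart.cloudera -q 'CREATE DATABASE IF NOT EXISTS my_test_db;'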

TwinCAT: Running on isolated cores failed

I was trying to activate my configuration on my local PC, but it failed. I tried:
Isolating 1 or 2 cores on my PC (under SYSTEM > Real-Time, then rebooting the PC) and running the PLC tasks on those cores. When I do this, I get the following error:
'TwinCAT System' (10000): Sending ams command >> Init4\RTime: Start Interrupt: Ticker started >> AdsWarning: 4118 (0x1016, RTIME: startup of isolated CPU fails!) << failed!
I then tried to run it on the normal Windows-dedicated CPUs (so none of the cores were isolated). When I activated the configuration (with Virtualization enabled in the BIOS), I got the following error message:
Setting TwinCAT in Run Mode with KB4056894 is not possible
Uninstall KB4056894
or
Activate a solution using only isolated cores
I could not find KB4056894 installed on my PC. Any other solution?
I'm using TwinCAT 3 Build 4022.14 under Windows 10.
From Beckhoff support:
According to the error note, the Microsoft patch for spectre/meltdown
is installed on your PC. Normally, the TC3 should work with this patch
when using isolated cores…
However, since version TC3 Build 4022.16, this problem is solved.
I installed 4022.22 and everything worked.
I just want to share my experience with this error and how I solved it. In the Real-Time menu, I set the CPU cores to 1 shared and 3 isolated, since my CPU has 4 cores. I then applied this value on the target, which asked for a reboot. After the reboot, the error was gone and I was able to run my code.

ConnectionPool::PoolShuttingDownError thrown once in a while by application_controller rails using Mongodb replicaSet

I have a RoR application running on two different servers. They run the same version of the app and have similar configurations. I have a MongoDB replica set running on both servers, with a third server acting as an arbiter.
Everything runs fine and the data syncs perfectly. But after 2 weeks of running, one of the servers started throwing ConnectionPool::PoolShuttingDownError. I checked the log and can see that the error was raised in the application controller. I didn't change any code on either server.
The server raising the error is fine until it gets 6-7 simultaneous requests, or when you refresh the page 6-7 times in quick succession. It throws the error once; refresh the page again and it is back to normal. This is weird, and I can't understand why one server has this problem while the other doesn't, and only some of the time.
I am using Mongoid with Moped, Rails 4.1.0 and Ruby 2.1.5. I also checked the available connections using db.serverStatus().connections, which is around 51158, and the ulimit for max processes is 257185.
I searched a lot but I am still unsure of the cause of this problem. It would be great if someone could shed some light on this issue. Any help will be appreciated. Thanks in advance.
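For reference, this is roughly how I checked those numbers on the server raising the error (the exact invocation may differ on your setup):

# Connection statistics reported by MongoDB (legacy mongo shell)
mongo --eval 'printjson(db.serverStatus().connections)'

# Limits for the user running the Rails app
ulimit -u   # max user processes
ulimit -n   # max open file descriptors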

PostgreSQL 8.3.7: "FATAL: could not reattach to shared memory" and "WARNING: worker took too long to start; cancelled"

We record our office IP phone activity using Xima software's Chronicall, which uses a PostgreSQL backend. The server on which both of these are installed is an ESXi 5.5 VM running Windows Server Standard 2008 SP1. For some time now, we have been getting the following PostgreSQL errors in the Windows Event Viewer:
"FATAL: could not reattach to shared memory (key=248,
addr=02510000): 487"
"WARNING: worker took too long to start; cancelled"
These errors occur every hour or two, and always occur back-to-back in the order listed above.
Xima support has looked at the issue multiple times and has not been able to resolve it. Upon their recommendation, I have upgraded Java, disabled antivirus, and run the Windows Memory Diagnostic Tool (came back clean), but the errors persist. Xima has specifically stated that PostgreSQL should not be updated, as versions above 8.3.7 are known to cause other issues with Chronicall.
Any other suggestions to resolve this issue?
I would say the company (Xima) is at fault here, since PostgreSQL 8.3.7 is hopelessly outdated.
Quoting the official Versioning policy of Postgres:
Postgres 8.3 reached EOL in February 2013.
Moreover, Postgres strongly recommends:
We always recommend that all users run the latest available minor release for whatever major version is in use.
The latest point release of 8.3 is 8.3.23.
Version 8.3.7 is just not right.
Running Postgres on a Windows Server 2008 VM wouldn't be my first choice either...
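If you do end up applying the 8.3.x minor update, a quick way to confirm which point release the server is actually running (the postgres role name below is an assumption; Chronicall's installer may use a different one):

# Version of the running server (run from a command prompt on the Windows host)
psql -U postgres -c "SELECT version();"

# Version of the psql client binary, for comparison
psql --version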