I'm getting a "certificate has expired" message when using wget to retrieve data from URLs.
I've successfully patched multiple RHEL7 machines using the method on RHEL bugzilla, however I'm unable to successfully update a couple of older CentOS 6 machines.
On the CentOS 6 machine, per this tweet, I've copied the blacklist file and run update-ca-trust extract, but I continue to get "certificate has expired".
The temp workaround is adding --no-check-certificate to the wget calls, since these are local trusted URLs.
Is there a better solution for EL6 systems, short of rebuilding the servers with a newer OS?
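Spelled out, the blacklist step I performed is roughly the following (a sketch: it assumes the expired root is Let's Encrypt's "DST Root CA X3", the usual culprit for this message since its expiry in September 2021, and that you have its PEM file at hand; on EL6 the consolidated trust store may first need to be enabled):

# copy the expired root into the blacklist and rebuild the trust store
cp DST_Root_CA_X3.pem /etc/pki/ca-trust/source/blacklist/
update-ca-trust enable    # EL6 only: switch to the consolidated trust store
update-ca-trust extract
# re-test against one of the affected URLs (placeholder URL)
wget -O /dev/null https://internal.example.com/some/file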
When I update a file (e.g., I upload a new version of an existing file via FTP), the Windows clients continue to see the previous date and time; I have to stop and start the smb and nmb services for the clients to see the update. This is samba-4.10.16-20.el7_9.x86_64 on CentOS Linux release 7.9.2009 (Core), kernel 3.10.0-1160.83.1.el7.x86_64.
Update the samba and kernel packages.
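A minimal sketch of that update on CentOS 7, assuming the stock repositories still carry newer builds:

# update samba and the kernel, then restart the services
yum update samba\* kernel
systemctl restart smb nmb
# note: a new kernel only takes effect after a reboot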
Several years ago I wrote a Perl CGI script that connects to an openLDAP server and starts TLS when available.
The script ran with openLDAP-2.4.41 on SLES 12 SP5 without a problem, but after updating some packages, the script cannot start TLS using $ldap->start_tls any more.
The error message is:
"Failed to set SSL cipher list error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list: no cipher match"
The updates installed were (AFAIK) openldap2-2.4.41-22.5.1 together with libopenssl1_1-1.1.1d-2.54.1. Specifically there was no update for the Perl LDAP modules.
My code does not specify a cipher list, but it specifies the CA path and uses require for certificate verification.
The part of the code that outputs the error message is:
$msg = $ldap->start_tls(%options);
if ($msg->code()) {
perr($q, 'start_TLS() failed: ', $msg->error);
}
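For reference, the %options hash in the failing call is essentially the following (a reconstruction from the description of the code, not a verbatim quote; the CA path is the one mentioned below):

my %options = (
    verify => 'require',
    capath => '/etc/ssl/certs',
);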
A truly odd thing is that a simple test case started on a different server (that should have the same software) succeeds with cipher AES256-GCM-SHA384.
Even when I run that script on the same server, it succeeds with the same cipher.
The code used in the test case basically is:
my $m = $l->start_tls(verify => 'verify');
$m->code() || print $l->cipher(), "\n";
While looking closer, I noticed that my CA path is /etc/ssl/certs which is a symbolic link to /var/lib/ca-certificates/pem updated about the same time as the other RPM packages.
Even when changing the CA path in the CGI to /var/lib/ca-certificates/pem I get the same error.
The web server being used was updated with the other packages, too; it is apache2-2.4.51-35.7.1.
The Perl code runs with PerlResponseHandler ModPerl::Registry, and the apache RPM changelog says it was "build against openssl 1.1".
What might be wrong or causing this, and how can I fix it?
As my CGI script magically worked again after having installed current SLES 12 SP5 updates, I must assume that there was a bad update causing the failure.
Suspecting that the packages causing it might have been apache2, perl, or openssl, I tried to pin down the bad versions:
apache2-2.4.51-35.7.1
libopenssl1_1-1.1.1d-2.54.1
were probably the bad versions, while
libopenssl0_9_8-0.9.8j-106.33.1
apache2-2.4.51-35.13.1
openssl1_0_0-1.0.2p-3.48.1
fixed the problem again.
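To check whether a given host is past the bad builds, and to reproduce the handshake outside Perl/Apache, something like this should do (version strings as listed above; the LDAP host is a placeholder, and -starttls ldap needs a reasonably recent openssl binary):

# confirm the installed builds are at least the fixed ones
rpm -q apache2 libopenssl1_1 openssl1_0_0
# if not, pull the current SLES 12 SP5 updates
zypper refresh && zypper update apache2 libopenssl1_1
# reproduce the StartTLS handshake directly against the LDAP server
openssl s_client -starttls ldap -connect ldap.example.com:389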
I tried to create a cluster in an OpenShift 4.2 (RHCOS) environment. I have my own local DNS server and an HAProxy server. I created 3 master and 2 worker nodes in a VMware environment as per the documentation. At the end of the new cluster creation I'm getting an error:
Unable to connect to the server: x509: certificate has expired or is
not yet valid
Does anyone have an idea why am I getting this error?
It's an ignition file problem. When you create the ignition files, you have to finish the installation within 24 hours, because the ignition files contain a certificate that expires after 24 hours.
You must delete the whole install folder; deleting just the contents of the install folder is not enough, because it contains hidden files, and those files recreate the old ignition files.
Same problem here, but just regenerating the ignition config files did not fix it. To fix it I also had to delete the auth/ subfolder and regenerate.
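A sketch of the full cleanup and regeneration (install_dir stands for whatever directory you pass to the installer; this assumes you kept a backup of install-config.yaml, since the installer consumes it):

# the hidden state file is what resurrects the old certificates
rm -rf install_dir/    # removes auth/, the *.ign files and .openshift_install_state.json
mkdir install_dir
cp install-config.yaml.bak install_dir/install-config.yaml
openshift-install create ignition-configs --dir=install_dir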
Folks, how do I make sure all files of an RPM (CentOS) were removed? The problem is that I installed a piece of software called ShinyProxy (https://www.shinyproxy.io/), and after 3 days running as a test server we received a "NetScan Detected" message from Germany. Now we want to clean everything up by removing the RPM, but it seems it is not that easy, as something left on the system continues to send and receive lots of packets (40kps). I really apologize to the ShinyProxy folks if that is not related to their work; so far this is the last system under investigation.
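For the "make sure all files were removed" part, rpm itself can answer that (the package name shinyproxy is an assumption; use the first query to find the exact one):

rpm -qa | grep -i shiny       # find the exact package name
rpm -ql shinyproxy            # list every file the package installed
rpm -e shinyproxy             # remove the package
rpm -ql shinyproxy            # should now report the package is not installed
rpm -qf /path/to/leftover     # check whether a suspicious file belongs to any package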
Your Docker API is bound to your public IP and is therefore directly reachable from an external network. You should not do this, as it allows anybody to run arbitrary Docker instances and even commands on your Docker host.
You should secure your docker install:
- bind it to the 127.0.0.1 (lo) interface and adapt the shinyproxy yml file accordingly (see the sketch below this list)
- setup TLS mutual auth (client certificate) on the docker API (it is supported by shinyproxy)
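A minimal sketch of the binding option (port 2375 and the file paths are common defaults, not taken from your setup; note that on systemd hosts a -H flag in the docker.service unit conflicts with a "hosts" entry in daemon.json, so check which one your install uses):

/etc/docker/daemon.json:

{
  "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"]
}

application.yml for shinyproxy:

proxy:
  docker:
    url: http://127.0.0.1:2375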
I have successfully mounted and used NFS version 4 with a Solaris server and a FreeBSD client.
The problem is when having a FreeBSD server and a FreeBSD client at version 4; version 3 works excellently.
I have been using the FreeBSD NFS server since FreeBSD version 4.5 (then with IBM AIX clients).
The problem:
it mounts OK, but no principals appear in the Kerberos cache, and when trying to read or write on the mounted filesystem I get the error: Input/output error
nfs/server-fqdn@REALM and nfs/client-fqdn@REALM principals are created on the Kerberos server and stored properly in keytab files on both sides.
Using these, I obtain TGTs from the KDC on both sides, into root's Kerberos cache.
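For double-checking that part, the usual commands are (MIT Kerberos syntax; the Heimdal tools in the FreeBSD base system spell these differently, e.g. "ktutil -k /etc/krb5.keytab list" instead of "klist -k"):

# verify the keytab actually contains the nfs principal
klist -k /etc/krb5.keytab
# obtain a ticket as the host principal into the current cache, then retry the mount
kinit -k nfs/server-fqdn@REALM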
I start services properly:
file /etc/rc.conf
rpcbind_enable="YES"
gssd_enable="YES"
rpc_statd_enable="YES"
rpc_lockd_enable="YES"
mountd_enable="YES"
nfsuserd_enable="YES"
nfs_server_enable="YES"
nfsv4_server_enable="YES"
then I start the services:
at the client: rpcbind, gssd, nfsuserd;
at the server: all of the above, with the exports file:
V4: /marble/nfs -sec=krb5:krb5i:krb5p -network 10.20.30.0 -mask 255.255.255.0
I mount:
# mount_nfs -o nfsv4 servername:/ /my/mounted/nfs
#
# mkdir /my/mounted/nfs/e
# mkdir: /my/mounted/nfs/e: Input/output error
#
Even an ls command gives the same result.
klist does not show any new principals in root's cache, or any other cache.
I love the amazing performance of version 3, but I need the local file locking feature of NFSv4.
The second reason is security: I need Kerberised RPC calls (sec=krb5p).
If any of you have achieved this using a FreeBSD server for NFS version 4, please give feedback on this question; I'll be glad if you do.
Comments are not a good place for code examples, so here is the setup of a FreeBSD client and FreeBSD server that works for me. I don't use Kerberos, but if you get it working with this minimal configuration then you can add Kerberos afterwards (I believe).
Server rc.conf:
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 4"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"
mountd_flags="-r"
Server /etc/exports:
/parent/path1 -mapall=1001:1001 192.168.2.200
/parent/path2 -mapall=1001:1001 192.168.2.200
... (more shares)
V4: /parent/ -sec=sys 192.168.2.200
Client rc.conf:
nfs_client_enable="YES"
nfs_client_flags="-n 4"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
Client fstab:
192.168.2.100:/path1/ /mnt/path1/ nfs rw,bg,late,failok,nfsv4 0 0
192.168.2.100:/path2/ /mnt/path2/ nfs rw,bg,late,failok,nfsv4 0 0
... (more shares)
As you can see, the client tries to mount only what's after the /parent/ path specified in the V4: line on the server. 192.168.2.100 is the server IP and 192.168.2.200 is the client IP. This setup will only allow that one client to connect to the server.
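For a one-off test before touching fstab, the equivalent manual mount would be (IPs as in the example above):

# mount_nfs -o nfsv4 192.168.2.100:/path1 /mnt/path1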
I hope I haven't missed anything. BTW, please raise questions like this on Super User or Server Fault rather than Stack Overflow. I am surprised this question hasn't been closed yet because of that ;)