I am presently running load tests for a project over a VPN, using both k6.io and SoapUI; I run the same test with both tools just to compare results. k6.io is a JavaScript-based tool that gathers robust, configurable metrics, and so does SoapUI. The challenge is that SoapUI works smoothly while k6.io always hits a certificate issue. Since SoapUI works fine, I believe the problem is either a certificate issue on my system or in k6.io; I have yet to figure out what the real problem is.
The main reason I like k6.io is the robust reporting tools it integrates with, such as InfluxDB and Grafana; I have to generate graphical reports for the tests.
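(For context, this is roughly how I ship the metrics to InfluxDB for the Grafana dashboards; the InfluxDB URL is local to my setup:)

k6 run --out influxdb=http://localhost:8086/k6 AccountClosure.js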
Error from the k6.io command line:
script: AccountClosure.js
output: -
scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
* default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)
WARN[0002] Request Failed error="Post \"https://mysite.behindvpn.com:4333/fiwebservice/services/FIPWebService\": x509: certificate signed by unknown authority"
running (00m01.8s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs 00m01.4s/10m0s 1/1 iters, 1 per VU
If you don't want to run with --insecure-skip-tls-verify, I think your only option is to add the root CA certificate to your local store.
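(For completeness, a sketch of the skip-verification route, using the script name from the run above; skipping verification is only reasonable for testing over your own VPN:)

k6 run --insecure-skip-tls-verify AccountClosure.js

The same thing can be set in the script options:

export const options = {
  insecureSkipTLSVerify: true,
};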
On Linux this would involve the ca-certificates package and copying your cert to the correct location. This is system-dependent, but see the instructions for Ubuntu, or otherwise consult your OS documentation.
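On Ubuntu, for example, the steps would look roughly like this (the certificate filename is a placeholder; update-ca-certificates only picks up files ending in .crt):

sudo cp my-root-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates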
Ivan
https://community.k6.io/t/x509-certificate-signed-by-unknown-authority/1057/2?u=ken4ward
Related
I'm fighting a weird issue.
In one of my environments, Vault is not able to work stably with etcd as storage.
So here is the story.
I have etcd server version 3.5 installed. It works perfectly with the etcdctl tool.
When I run Vault on one system (Ubuntu 20.04.2 LTS), I have issues with the JWT token.
In the Vault logs I see:
{"level":"warn","ts":"2022-03-29T16:23:27.614Z","logger":"etcd-client","caller":"v3#v3.5.0/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0002b7500/#initially=[https://etcd-server:2379]","attempt":99,"error":"rpc error: code = Unauthenticated desc = etcdserver: invalid auth token"}
But Vault is able to read some records, so sometimes it considers the JWT OK.
When I copy the same binary to Fedora 35 and run it there, I do not have the issue.
From the etcd logs I can extract the JWT token in both cases and verify it using JWT tools.
Both tokens are correct and the signatures are OK as well.
etcd's token auth is running with:
name: ETCD_AUTH_TOKEN
value: jwt,priv-key=jwt-token.pem,sign-method=RS256,ttl=10m
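As an extra sanity check on the signing key itself (jwt-token.pem is the key referenced in the config above), one can run:

openssl rsa -in jwt-token.pem -check -noout    # confirm the private key is valid RSA
openssl rsa -in jwt-token.pem -pubout          # print the public half used for verification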
Interestingly, if I run the same setup on another Fedora 35 box, I get the JWT issue as well.
If I set ETCD_AUTH_TOKEN to 'simple', then, as expected, every OS starts working without issues.
So I am really lost: first, why it does not work with JWT everywhere, and second, why it works on only one system.
The Vault binary is static and downloaded from the HashiCorp site as-is, so it does not depend on system libraries.
Time is synced on all systems.
I will appreciate any help and ideas.
Thank you
Folks, how do I make sure all files of an RPM (CentOS) were removed? The problem is that I installed a piece of software called ShinyProxy (https://www.shinyproxy.io/), and after 3 days running as a test server, we received a "NetScan Detected" message from Germany. Now we want to clean everything up by removing the RPM, but it seems it's not that easy, as something else left on the system continues to send and receive lots of packets (40kps). I really apologize to the ShinyProxy folks if this is not part of their work; so far this is the last system under investigation.
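(For reference, the rpm side of the check looks roughly like this; the exact package name is a guess, so I list it first:)

rpm -qa | grep -i shiny     # find the exact package name
rpm -ql shinyproxy          # list every file the package owns
rpm -e shinyproxy           # remove the package
rpm -qa | grep -i shiny     # confirm nothing is left installed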
Your Docker API is bound to your public IP and is therefore directly reachable from an external network. You should not do this, as it allows anybody to run arbitrary Docker containers and even commands on your Docker host.
You should secure your Docker install (rough sketch below):
- bind it to the 127.0.0.1 (lo) interface and adapt the shinyproxy yml file accordingly
- set up TLS mutual auth (client certificates) on the Docker API (it is supported by shinyproxy)
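A sketch of the first option (the daemon.json path is Docker's default; note that on systemd installs an -H flag in the unit file can conflict with "hosts" in daemon.json, and double-check the ShinyProxy property name for your version):

# /etc/docker/daemon.json -- make the API listen on loopback only
{
  "hosts": ["tcp://127.0.0.1:2375", "unix:///var/run/docker.sock"]
}

# shinyproxy application.yml -- point it at the loopback address
proxy:
  docker:
    url: http://127.0.0.1:2375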
Not 100% sure this is a Perl issue, but it seems to be. I have an IPN script that connects with PayPal to verify transactions. It was working fine until yesterday, when I installed LWP::Protocol::https. Since then, it's been failing with the error:
Can't connect to www.paypal.com:443 (certificate verify failed)
LWP::Protocol::https::Socket: SSL connect attempt failed error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed at /usr/local/share/perl5/LWP/Protocol/http.pm line 47.
Running GET https://www.paypal.com from bash (which uses LWP) yields the same error message. OTOH, running GET https://www.gmail.com is successful. Running openssl s_client -host paypal.com -port 443 returns (among other things) Verify return code: 0 (ok). curl "https://www.paypal.com/cgi-bin/webscr?cmd=_notify-validate" successfully receives a response from PayPal. So it does seem to be Perl-specific.
Module versions:
LWP 6.13
LWP::Protocol::https 6.06
IO::Socket::SSL 2.015
Mozilla::CA 20141217 (note: I've tried the script both using Mozilla::CA and without it... results have been the same)
Please let me know if there are other relevant modules. Perl version is 5.10.1. Server OS is RHEL 6.
In short:
I don't know what "without it" means for RHEL 6, but please try again with Mozilla::CA 20130114 or with the "older ca-bundle" linked from http://curl.haxx.se/docs/caextract.html.
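A sketch of wiring a specific bundle into LWP with ssl_opts (the path is a placeholder for wherever you save the downloaded bundle):

use LWP::UserAgent;

my $ua = LWP::UserAgent->new(
    # placeholder path: the older CA bundle downloaded from curl.haxx.se
    ssl_opts => { SSL_ca_file => '/path/to/older-cacert.pem' },
);
print $ua->get('https://www.paypal.com')->status_line, "\n";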
Details:
The certificate chain you get from www.paypal.com is:
[0] www.paypal.com
[1] Symantec Class 3 EV SSL CA - G2
[2] VeriSign Class 3 Public Primary Certification Authority - G5
The last certificate in the chain is signed by the 1024-bit certificate
/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
Since 1024-bit certificates were removed by Mozilla at the end of last year, you will no longer find them in the current Mozilla::CA. But browsers don't need the old certificate, because they build the trust chain from certificates [0] and [1] alone: they use a built-in certificate instead of the certificate [2] sent by the server.
While this newer built-in certificate is also included in Mozilla::CA, it will not be used, because of a long-standing bug in how OpenSSL validates certificates: it always tries to validate the longest chain and does not check whether a shorter chain is possible.
For more details about this problem see:
- the original bug report against OpenSSL from 2012
- a more recent description of the problem, which affects not only Perl but also Python, Ruby, curl, wget... i.e. everything that uses OpenSSL: https://stackoverflow.com/a/27826829/3081018
The problem can be resolved by using the flag X509_V_FLAG_TRUSTED_FIRST, which was introduced with OpenSSL 1.0.2 (released 4 months ago and probably not in RHEL yet), or by using an even newer, not yet released version of OpenSSL where they finally fixed the problem (see https://rt.openssl.org/Ticket/Display.html?id=3637&user=guest&pass=guest).
The problem can be worked around by keeping the older 1024-bit CA certificates available, i.e. either using an older Mozilla::CA or CA bundle, or using the system CA store, which usually still includes these older CAs. See also:
- a current bug report against IO::Socket::SSL to use X509_V_FLAG_TRUSTED_FIRST by default (if available). This flag gets set with 2.016 (not yet released), but that needs a version of Net::SSLeay which exports this flag (not yet released) and OpenSSL 1.0.2 (not included in RHEL).
- a pull request against LWP to use the default CA store of the system instead of Mozilla::CA. This would probably solve the problem for you too. Note that Debian/Ubuntu have a similar patch included. I don't know about the version of LWP shipped with RHEL.
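Until one of those lands, the quickest way to test the system-store workaround is the environment variable LWP honors (the bundle path is the stock RHEL 6 location; the script name is a placeholder):

PERL_LWP_SSL_CA_FILE=/etc/pki/tls/certs/ca-bundle.crt perl your-ipn-script.pl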
We're currently writing a tool aimed at checking the validity of credentials across various applications (HTTP, SSH, SMB, RDP). No problem for the first three, but for RDP I couldn't find a single way of doing this easily.
The tool is embedded within a web app hosted on a Linux box, so there is no X server available.
The only tool I have successfully used to validate RDP credentials from the command line is THC-Hydra: by supplying a single username and password, it works correctly for older versions of RDP servers, or for those where Network Level Authentication has been lowered.
However, THC-Hydra seems to hang when checking RDP credentials for newest versions of Windows, or where Network Level Authentication has been hardened.
Medusa with a patched version of the rdesktop client fails as well. (some servers require CredSSP, SSL, ...)
There's also nmap's ncrack, but for some reason I only get "READ" timeouts.
EDIT: I got Ncrack to work; however, it fails, at least on Windows 2008 R2 (it doesn't find the credentials even when I provide the correct ones).
Any clues to help me?
Cheers
Actually I found a reliable way to do that. It's always when you stop looking for something that you find it :)
Using the super awesome remote desktop client FreeRDP and the "+auth-only" switch: the exit status is 0 when authentication succeeds and 1 otherwise. There are also error messages you can grep for (examples below, followed by a small wrapper sketch).
Failed auth:
jrm@deb-jrm:~$ static/xfreerdp /v:10.0.0.1 /cert-ignore /u:MyUser /d:MyDomain /p:WRONGPASS +auth-only
Authentication only. Don't connect to X.
credssp_recv() error: -1
freerdp_set_last_error 0x20009
Authentication failure, check credentials.
If credentials are valid, the NTLMSSP implementation may be to blame.
Error: protocol security negotiation or connection failure
Authentication only, exit status 1
Authentication only, exit status 1
Valid auth:
jrm@deb-jrm:~$ static/xfreerdp /v:10.0.0.1 /cert-ignore /u:MyUser /d:MyDomain /p:GOODPASS +auth-only
Authentication only. Don't connect to X.
Authentication only, exit status 0
Authentication only, exit status 0
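For scripting this into the tool, a minimal wrapper sketch (host, user, domain and password are positional placeholders):

#!/bin/sh
# check_rdp.sh -- exits 0 if the RDP credentials are valid, non-zero otherwise
xfreerdp /v:"$1" /cert-ignore /u:"$2" /d:"$3" /p:"$4" +auth-only >/dev/null 2>&1
status=$?
if [ $status -eq 0 ]; then
    echo "valid credentials for $2 on $1"
else
    echo "invalid credentials for $2 on $1"
fi
exit $status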
I'm using WWW::Mechanize to load the catalog from our product provider into our database. I run this script every 2 hours every day, and it completes in around 12 minutes using around 50 simultaneous threads.
Everything was working perfectly until this weekend. They put their website offline for scheduled maintenance and, once they were online again, my script no longer worked. After analyzing things, it comes down to the following code failing:
use strict;
use warnings;
use WWW::Mechanize;
my $mec = WWW::Mechanize->new;
$mec->get('https://www.imstores.com/Ingrammicromx/login/login.aspx');
print $mec->content;
The code dies (after about 60 seconds) with the following message:
Error GETing https://www.imstores.com/Ingrammicromx/login/login.aspx:
Can't connect to www.imstores.com:443 at test.pl line 7.
Now, these are the points that are making it difficult for me to find the problem:
It's not network-related - if I visit the same URL from any of my browsers, I get the page.
If I try the same code on a remote machine that contains an exact copy of my Perl installation, it works.
If I use Net::SSL before WWW::Mechanize, it takes a very LONG time, but finally gets the page.
If I try any other SSL page, like 'https://www.paypal.com', it works and very fast.
Then again, it was working before their scheduled maintenance.
I'm not sure what else to try. If I switch to the non-SSL version, it works, but I don't want to do that since we automate purchasing operations.
Among the many things that have crossed my mind while wondering why it works on the remote machine and why I can open the page in my browsers on the local one:
Is it possible to get blocked by my SSL public key? If so, what public key is LWP/Mechanize using for SSL sessions, and how can I use a different one?
Some data on my current setup:
OS: Windows 7 Ultimate x64
Perl version: 5.16.3 x64
LWP::UserAgent version: 6.05
WWW::Mechanize version: 1.72
IO::Socket version: 1.34
IO::Socket::SSL version: 1.85
Net::SSL version: 2.85
Crypt::SSLeay version: 0.64
Thanks in advance for any helpful comment.
Here's the actual reason for the problem: you need to use SSLv3 or TLS1 instead of TLS1.2 to connect to that server. This is probably why it worked when you used Net::SSL first; I believe it tries different ciphers in a way that WWW::Mechanize doesn't.
This is how I found it:
I tried connecting from several different servers and found that the ones that worked have an older SSL version. I then checked the difference between which ciphers are used in those versions, and tried connecting with different ciphers.
When I connect using TLS1.2, I get:
$ openssl s_client -connect www.imstores.com:443 -tls1_2
CONNECTED(00000003)
write:errno=54
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 322 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---
But when I connect with SSLv3 or TLS1, I get:
$ openssl s_client -connect www.imstores.com:443 -tls1
CONNECTED(00000003)
depth=0 /serialNumber=O3gPUAuGGROuHEhlyLaeJfj7SOn6tFTx/C=US/O=www.imstores.com/OU=GT29846307/OU=See www.geotrust.com/resources/cps (c)11/OU=Domain Control Validated - QuickSSL(R) Premium/CN=www.imstores.com
verify error:num=20:unable to get local issuer certificate
[...and so on, including server certificate...]
Exactly how to make WWW::Mechanize use TLS1 or SSLv3 is left as an exercise for the student.
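For what it's worth, a sketch of one way to do it, assuming ssl_opts passes through WWW::Mechanize (an LWP::UserAgent subclass) down to IO::Socket::SSL:

use strict;
use warnings;
use WWW::Mechanize;

# force the handshake down to TLS 1.0 for this one server
my $mec = WWW::Mechanize->new(
    ssl_opts => { SSL_version => 'TLSv1' },
);
$mec->get('https://www.imstores.com/Ingrammicromx/login/login.aspx');
print $mec->content;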