Charles monitor terminal request - charles-proxy

Is there any way to monitor the requests of an application such as the terminal? I can monitor Chrome, but not other applications.
My Charles version is 4.2.

I think it depends on the command-line tool you are trying to monitor. To capture plain HTTP requests made from a terminal, you just need to set the http_proxy environment variable, for example:
$ export http_proxy="http://localhost:8888"
$ curl "http://www.google.com"
This will make Charles capture the HTTP request to Google, but it may not work for every application started from that terminal; you will probably have to configure the proxy on those other applications individually.
Just as an example, if you want to capture HTTP requests from a Java application you are developing, you will need to add the proper proxy configuration to the java command line, something like:
$ JAVA_FLAGS="-Dhttp.proxyHost=localhost -Dhttp.proxyPort=8888 -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8888"
$ java $JAVA_FLAGS ...
To enable HTTPS sniffing with Charles, you will need to add its certificate to the JVM's keystore with:
$ keytool -import -alias charles -file charles-ssl-proxying-certificate.cer -keystore $JAVA_HOME/jre/lib/security/cacerts
Please note that:
- the cacerts file location may vary depending on the Java version (on Java 10 it's under $JAVA_HOME/lib/security/)
- the password for the cacerts file, if unchanged, is changeit (so consider changing it)
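To confirm the import worked, you can list the certificate by its alias; the keystore path below matches the keytool command above and may differ on newer Java versions:

```shell
# List the imported Charles certificate by its alias
# (keystore path varies by Java version; password defaults to "changeit")
keytool -list \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
  -storepass changeit \
  -alias charles
```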
Hope this helps.

Related

How to configure VS Code to remember proxy settings (Windows)

I'm a little bit fed up with this window:
I checked the configuration and I added the proxy URL to the http.proxy entry as described here:
"http.proxy": "http://frustratedusername:password#pesky.proxy.com:8080/"
But it didn't work. Then, I tried setting the http_proxy and https_proxy environment variables, but it didn't work neither.
Is there any way to make VS Code remember the proxy settings?
Remembering proxy credentials should now be supported since VS Code 1.51 (Oct. 2020), and is confirmed in VS Code 1.52 (Nov. 2020):
Remember proxy credentials
We are overhauling the login dialog that shows when a network connection requires authentication with a proxy.
A new setting, window.enableExperimentalProxyLoginDialog: true, will enable this new experience that we plan to make the default in a future release.
The dialog will appear inside the VS Code window and offer a way to remember the credentials so that you do not have to provide them each time you start VS Code.
Credentials will be stored in the OS standard credential store (keychain on macOS, Windows Credential Manager on Windows, and gnome keyring on Linux).
We still only show this dialog once per session, but might revisit this decision in the future. You will see the dialog appear again in case the credentials you selected to be remembered are not valid. Providing them again allows you to change them.
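For reference, the setting mentioned in the release note goes into your user settings.json (which accepts comments):

```jsonc
{
  // Opt in to the in-window proxy login dialog (VS Code 1.51+)
  "window.enableExperimentalProxyLoginDialog": true
}
```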
I tried with the proxy switch in the command itself, something like this, and it worked for me:
node-gyp configure --proxy=http://proxy.server.name:port
Try this: install the Node.js package, then open a command prompt and run the npm commands. Set the proxy using the commands below:
npm set proxy http://name:password@gtpproxy.proxy.com:8080
npm set https-proxy http://name:password@gtpproxy.proxy.com:8080
npm config set strict-ssl false -g
Note:
- Replace name with your email
- Replace password with the actual password
- Replace gtpproxy with the proxy address for your location
You will have to URL-encode your username and/or password. For example, if your username contains a domain, e.g. DOMAIN\username, you'll have to URL-encode the backslash and use DOMAIN%5Cusername instead. The same goes for your password: URL-encode every character that isn't allowed in a URL.
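If you're unsure of the right encoding, you can let Python compute it for you; the DOMAIN\username value here is just an illustrative credential:

```shell
# URL-encode a username containing a backslash (value is illustrative)
python3 -c "import urllib.parse; print(urllib.parse.quote('DOMAIN\\\\username', safe=''))"
# prints DOMAIN%5Cusername
```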

Secure Socket Layer (SSL) failure? [OE101C]

I have a process that uses web services via sockets/HTTPS. My code is suddenly erroring out when using sockets. I'm guessing it has something to do with Progress's internal certificate manager.
These are the errors that are occurring.
---------------------------
Error
---------------------------
Secure Socket Layer (SSL) failure. error code -54: certificate signature failure: for b0f3e76e.0 in n:\progra~1\oe101c\certs (9318)
---------------------------
Error
---------------------------
Connection failure for host api.constantcontact.com port 443 transport TCP. (9407)
---------------------------
Error
---------------------------
Invalid socket object used in WRITE method. (9178)
I then decided to get the cert chain from the site whose service I was using. I ran this command to get it:
openssl s_client -connect api.constantcontact.com:443 -showcerts > certs.out
After getting the certs I extracted each one into its own file and titled them cert1.cer, cert2.cer and cert3.cer. I registered them with certutil, but the error still occurred.
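For reference, that extraction step can be scripted; this awk sketch writes each BEGIN/END CERTIFICATE block in certs.out to its own numbered file (the filenames are illustrative):

```shell
# Split every certificate block in certs.out into cert1.cer, cert2.cer, ...
awk '/-----BEGIN CERTIFICATE-----/ {n++; keep=1}
     keep {print > ("cert" n ".cer")}
     /-----END CERTIFICATE-----/ {keep=0}' certs.out
```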
I then converted all three of them using this command.
openssl x509 -in cert1.cer -out cert1.pem -outform PEM
Then tried registering them again and still no solution.
I registered them in proenv this way.
certutil -import cert1.pem
They imported correctly, but I am still getting this error. Is there something I am missing, or could this be something entirely different? In the original error, the hash b0f3e76e.0 is in fact generated by the third cert. I attempted to delete the hash in the certs folder and regenerate it. I'm completely clueless at this point. The app has worked for a while, and I remember having this issue in the past but can't remember what fixed it. It seems that when someone moved the drive Progress is installed on from a virtual drive to a physical one, this error started popping up again.
Thanks
Sorry, I misunderstood. You can google b0f3e76e.0 and try to find the root CA for it. Then copy its contents and use certutil -import on it to see if that works for you.
For example:
Go to https://alphassl.com.ua/support/root.pem
Copy and paste its contents into Notepad and save it as rootCA.pem.
Then use certutil -import rootCA.pem. This will give you the certificate the ABL program needs to handshake with the server you are using for the SSL socket connection.
Again, sorry for the misunderstanding.
The server side may have changed the certificate. You may want to re-import the certificate from the server using:
openssl s_client -connect api.constantcontact.com:443 -showcerts > certs.out
Then, after doing the import, see which hash-named file the error mentions. You can google it, find the corresponding certificate, and use certutil -import again for that specific certificate.
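To check which of your extracted files corresponds to a hash-named file like b0f3e76e.0, you can also compute each certificate's subject hash locally (the filename is illustrative):

```shell
# Print the subject hash OpenSSL uses for hash-named cert files (e.g. b0f3e76e.0)
openssl x509 -noout -hash -in cert1.pem
```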
I have experienced this type of issue before, where the server suddenly changes its certificate; the new one can involve chain certificates and may even come from a different certificate authority under its own company name.
It could also be that the certificate has expired. I just looked at the root CA at the URL I mentioned, and its validity period has expired:
Validity
Not Before: Sep 1 12:00:00 1998 GMT
Not After : Jan 28 12:00:00 2014 GMT
You need to find one whose expiration date is still valid.
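You can check a certificate's validity dates from the command line before importing it; -checkend 0 exits non-zero if the certificate has already expired (filename is illustrative):

```shell
# Show the validity window and test whether the cert has expired
openssl x509 -noout -dates -in rootCA.pem
openssl x509 -noout -checkend 0 -in rootCA.pem && echo "still valid" || echo "expired"
```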
Another solution: edit startup.pf in C:\Progress\OpenEdge and add -sslverify 0 as the last line (note that this disables certificate verification entirely). Example:
-cpinternal ISO8859-1
-cpstream ISO8859-1
-cpcoll Spanish9
-cpcase Basic
-d dmy
-numsep 44
-numdec 46
-sslverify 0

Proxy setting in gsutil tool

I use the gsutil tool to download archives from Google Storage.
I use the following CMD command:
python c:\gsutil\gsutil cp gs://pubsite_prod_rev_XXXXXXXXXXXXX/YYYYY/*.zip C:\Tmp\gs
Everything works fine, but if I try to run that command from behind a corporate proxy, I receive an error:
Caught socket error, retrying: [Errno 10051] A socket operation was attempted to an unreachable network
I tried several times to set the proxy settings in the .boto file, but to no avail.
Has anyone faced such a problem?
Thanks!
Please see the section "I'm connecting through a proxy server, what do I need to do?" at https://developers.google.com/storage/docs/faq#troubleshooting
Basically, you need to configure the proxy settings in your .boto file, and you need to ensure that your proxy allows traffic to accounts.google.com as well as to *.storage.googleapis.com.
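A minimal proxy section for the .boto file looks like this (host, port, and credentials are placeholders):

```ini
[Boto]
proxy = proxy.example.com
proxy_port = 8080
proxy_user = username
proxy_pass = password
```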
A change that fixes some of the proxy support was merged on GitHub just yesterday. Please try it out, or specifically, overwrite this file with your current copy:
https://github.com/GoogleCloudPlatform/gsutil/blob/master/gslib/util.py
I believe I am having the same problem, with the proxy settings being ignored under Linux (Ubuntu 12.04.4 LTS) and gsutil 4.2 (downloaded today).
I've been watching tcpdump on the host to confirm that gsutil is attempting to route directly to Google IPs instead of to my proxy server.
It seems that on the first execution of a simple command like "gsutil -d ls" it will use the proxy settings specified in .boto for the first POST, and then switch back to attempting to route directly to Google instead of through my proxy server.
Then, if I Ctrl-C and re-run the exact same command, the proxy setting is no longer used at all. This difference in behaviour baffles me. If I wait long enough, I think it will work for the initial request again, which suggests some form of caching is taking place. I'm not 100% sure of this behaviour yet because I haven't been able to predict when it occurs.
I also noticed that it always first tries to connect to 169.254.169.254 on port 80 regardless of proxy settings. A grep shows that this address is hardcoded into oauth2_client.py, test_utils.py, layer1.py, and utils.py (under different subdirectories of the gsutil root).
I've tried setting the http_proxy environment variable, but it appears that there is code that unsets it.

lighttpd - don't terminate on invalid ssl key/cert?

I'm using lighty on an embedded device where power users are allowed to manipulate the system configuration through a web interface. Users can upload a PEM file containing valid private key and certificate information.
Now I wonder how to prevent lighty from refusing to start if the file is corrupt. One idea is to check the file before installing it, but there seems to be no easy solution for that.
My other idea is to configure lighty so that it recognizes the file is invalid (which it in fact does) but does not terminate; instead, it should run without SSL features, i.e. HTTP only.
Is there a way to configure lighty for that? Or is there a better solution?
This should provide just enough check options for your needs:
openssl verify -help
You could use a Perl script to comment out any SSL-related config block if the cert check fails, but that is beyond this question.
Of course, you need to edit the init/service script that starts lighttpd (lighttpd.service).

Slow wget speeds when connecting to https pages

I'm using wget to connect to a secure site like this:
wget -nc -i inputFile
where inputFile consists of URLs like this:
https://clientWebsite.com/TheirPageName.asp?orderValue=1.00&merchantID=36&programmeID=92&ref=foo&Ofaz=0
This page returns a small gif file. For some reason, this is taking around 2.5 minutes. When I paste the same URL into a browser, I get back a response within seconds.
Does anyone have any idea what could be causing this?
The version of wget, by the way, is "GNU Wget 1.9+cvs-stable (Red Hat modified)"
I know this is a year old, but this exact problem plagued us for days.
It turned out to be our DNS server, but I got around it by disabling IPv6 on my box.
You can test this before making the system change by adding --inet4-only to the end of the command.
Try forging your User-Agent:
-U "Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-GB; rv:1.9.0.1) Gecko/2008070206 Firefox/3.0.1"
Disable certificate checking (slow):
--no-check-certificate
Debug what's happening by enabling verbosity:
-v
Eliminate the need for DNS lookups by hardcoding their IP address in your hosts file:
/etc/hosts
123.122.121.120 foo.bar.com
Have you tried profiling the requests using strace/dtrace/truss (depending on your platform)?
There is a wide variety of issues that could be causing this. What version of OpenSSL is being used by wget? There could be an issue there. What OS is this running on (full information would be useful)?
There could be some form of download slowdown enforced by the site, based on the user agent wget sends, to reduce the effects of spiders.
Is wget performing full certificate validation? Have you tried using --no-check-certificate?
Is the certificate on the client site valid? You may want to specify --no-check-certificate if it is a self-signed certificate.
HTTPS (SSL/TLS) Options for wget
One workaround is to delete the https:// prefix, so wget falls back to plain HTTP (only do this for public data).
This accelerated my download by around 100 times.
For instance, if you want to download via:
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
you can use the following command instead to speed it up:
wget data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2