I am running Ubuntu 12.04 and am trying to record HTTP requests with Perl's HTTP::Recorder module. I am following the instructions here: http://metacpan.org/pod/HTTP::Recorder
I have the following perl script running:
#!/usr/bin/perl
use HTTP::Proxy;
use HTTP::Recorder;
my $proxy = HTTP::Proxy->new();
# create a new HTTP::Recorder object
my $agent = HTTP::Recorder->new;
# set the log file (optional)
$agent->file("/tmp/myfile");
# set HTTP::Recorder as the agent for the proxy
$proxy->agent( $agent );
# start the proxy
$proxy->start();
I have changed Firefox's settings so that it uses localhost port 8080 as the proxy.
When I try to visit a website with Firefox, however, I get the following error:
Content Encoding Error
The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression.
Not sure what to do. When I visit:
http://http-recorder
(where my recorded activity is supposed to show up), I do see that GET requests are being logged. For example, if I try to visit Google, the log shows:
$agent->get('http://www.google.com');
Edit: I should also mention that Ubuntu is running inside VirtualBox; I'm not sure whether that is messing with anything.
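(For what it's worth, the Content Encoding Error typically means the server sent a gzip-compressed body that the recorder then rewrote without adjusting the Content-Encoding header. One possible workaround, sketched below and untested here, is to strip Accept-Encoding from outgoing requests using the HTTP::Proxy::HeaderFilter::simple class that ships with HTTP::Proxy, so servers reply uncompressed; whether this is sufficient depends on whether the agent re-adds the header itself.)
#!/usr/bin/perl
use strict;
use warnings;
use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;
use HTTP::Recorder;

my $proxy = HTTP::Proxy->new();

# Strip Accept-Encoding from outgoing requests so servers reply with
# plain (uncompressed) bodies that the recorder can rewrite safely.
$proxy->push_filter(
    request => HTTP::Proxy::HeaderFilter::simple->new(
        sub { $_[1]->remove_header('Accept-Encoding') }
    ),
);

my $agent = HTTP::Recorder->new;
$agent->file("/tmp/myfile");
$proxy->agent($agent);
$proxy->start();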
I'm using these settings in install4j.vmoptions (install4j 7.0.4):
# Clear out cached proxy information
-Dinstall4j.clearProxyCache=true
# and hopefully prevent install4j from reloading it from the default browser
-Dinstall4j.noProxyAutoDetect=true
# Unconditionally shows proxy config dialog
-Dinstall4j.showProxyConfig=true
# Log issues to %TEMP%\install4j_error.log
-Dinstall4j.showConnectError=true
I do not get a proxy dialog; the output in the error log is this:
java.io.IOException: Proxy password required. Please set the parameter -DproxyAuthPassword=[password].
at com.install4j.runtime.installer.helper.content.HttpRequestHandler.askForProxyPassword(HttpRequestHandler.java:335)
at com.install4j.runtime.installer.helper.content.HttpRequestHandler.getURLConnection(HttpRequestHandler.java:233)
at com.install4j.runtime.installer.helper.content.HttpRequestHandler.connect(HttpRequestHandler.java:124)
at com.install4j.runtime.installer.helper.content.Downloader.connect(Downloader.java:151)
at com.install4j.runtime.installer.helper.content.Downloader.connect(Downloader.java:24)
at com.install4j.runtime.installer.helper.content.HttpRequestHandler.connect(HttpRequestHandler.java:117)
at com.install4j.runtime.installer.helper.content.Downloader.connect(Downloader.java:146)
at com.install4j.gui.c.h.c(ejt:72)
at com.install4j.gui.c.h.run(ejt:38)
This runs contrary to the following two assumptions:
1) install4j should record the proxy settings.
2) install4j should show a proxy dialog with these settings.
What did I miss?
UPDATE: -DproxyAuth=false changes the error message; it now complains about certificate problems instead.
This concerns the JRE downloads in the install4j IDE and will be fixed in 7.0.7.
Please write to support@ej-technologies.com to get a build that contains the fix.
Alternatively, you can download JRE bundles manually from
https://download.ej-technologies.com/bundles/list
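As an aside, the original error message names the parameter it wants. If you do need to hand the proxy settings to install4j explicitly rather than rely on auto-detection, a sketch for install4j.vmoptions might look like the following; only -DproxyAuthPassword is taken from the log above, while the host, port, and user parameter names are my assumptions and may not match what this install4j version actually reads:
# -DproxyAuthPassword is named explicitly in the error log; the rest are assumed names
-DproxyHost=proxy.example.com
-DproxyPort=3128
-DproxyAuthUser=myuser
-DproxyAuthPassword=mypassword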
I see a 500 error when I hit my TYPO3 site in ddev.
ddev logs shows me what's going on:
2018/05/10 21:07:38 [error] 354#354: *45 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught UnexpectedValueException: The current host header value does not match the configured trusted hosts pattern! Check the pattern defined in $GLOBALS['TYPO3_CONF_VARS']['SYS']['trustedHostsPattern'] and adapt it, if you want to allow the current host header 'typo3-master.ddev.local:8000' for your installation. in /var/www/html/typo3/sysext/core/Classes/Utility/GeneralUtility.php:2728
I have ddev v0.18.0 and have run ddev config, and I've confirmed that the AdditionalConfiguration.php contains the generic $GLOBALS['TYPO3_CONF_VARS']['SYS']['trustedHostsPattern'] = '.*';, so that should match anything, right?
TYPO3 v9 does not seem to be able to use a trustedHostsPattern that contains a port. You're running on port 8000; for TYPO3 v9 and ddev, please figure out how to get it on port 80. There are troubleshooting instructions on how to figure out what else might be using port 80 at https://ddev.readthedocs.io/en/latest/users/troubleshooting/#unable-listen
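If it isn't obvious what is already holding port 80, a quick check along the lines of those troubleshooting instructions is to list the listener with either of these standard commands:
sudo lsof -i :80 -sTCP:LISTEN
# or, if lsof is not installed:
sudo netstat -tulpn | grep ':80 '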
In order to process some big data, I have to set up CKAN on a local machine. I've set up the whole system following this guide: http://docs.ckan.org/en/latest/maintaining/installing/install-from-source.html
I wanted to display a preview of a locally uploaded file, so the user can actually see it before downloading it. It doesn't work, because previews only work for online files: it DOES work with an online file, but NOT with a file I upload myself.
So I've been looking into the DataStore and DataPusher. I've followed every part of the guide, and it shows up in my CKAN. However, I get an error, specifically this one:
Upload error: An Error occurred while sending the job: 403 Client Error: Forbidden for url: http://127.0.0.1:8800/job
Here are the most important parts of my production.ini file (copying the whole thing would be very long):
ckan.site_url = http://localhost
ckan.plugins = datastore datapusher stats text_view image_view
recline_view recline_graph_view recline_map_view webpage_view
ckan.datapusher.formats = csv xls xlsx tsv application/csv
application/vnd.ms-excel
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
ckan.datapusher.url = http://127.0.0.1:8800/
I truly have no idea what my problem could be. I tried changing ckan.datapusher.url to 0.0.0.0 as the guide suggests, but that doesn't work either.
If the data to be added to CKAN is in a file on your computer, select the “Upload a file” option. CKAN will give you a file browser to select it. You should use the “Link to a file” option only for publicly available resources.
Have you installed DataPusher as well? It's a separate process running on port 8800. CKAN uses the DataStore to provide a grid view of tabular data, and data needs to be pushed through DataPusher in order to be used by the DataStore.
Yes, you need to set up DataPusher. It's not activated by default.
Pull the datapusher code, install the dependencies and run it using:
python datapusher/main.py deployment/settings.py
The instructions to configure the settings are on the repository.
Here's the datapusher manual: http://docs.ckan.org/projects/datapusher/en/latest/
Here's the repository: https://github.com/ckan/datapusher
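Putting those steps together, a rough sketch; this assumes the repository's usual layout, with a top-level requirements.txt and the example settings under deployment/ (running inside a virtualenv is sensible but omitted here):
git clone https://github.com/ckan/datapusher
cd datapusher
pip install -r requirements.txt
python datapusher/main.py deployment/settings.py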
I had the exact same error message.
This post solved my issue, though.
In short: insert/check the following in your virtual host in /etc/apache2/sites-enabled/datapusher.conf:
<Directory /etc/ckan>
Options All
AllowOverride All
Require all granted
</Directory>
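The new directives only take effect once Apache re-reads its configuration; on a Debian/Ubuntu layout, which the sites-enabled path suggests, something like:
sudo apache2ctl configtest
sudo service apache2 reload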
I have a Perl script that uses WWW::Mechanize to connect to a site over HTTPS, and that script just stopped working the other day. The status and error message I get back are 500 and "Can't connect to jobs.illinois.edu:443". The URL I'm trying to connect to is https://jobs.illinois.edu/. I can connect from my browser (Firefox). My platform is Linux -- up-to-date Arch Linux. I can also connect (using WWW::Mechanize) to other HTTPS sites.
I tried using LWP::UserAgent, and the behavior is the same.
I'm using ssl_opts => { SSL_version => 'TLSv1' }; I don't remember why I added that -- it may have been necessary to get it working at some point.
Any ideas on how to fix this, or how I might get more information as to what the problem is? Are there other SSL options I can try?
I have a feeling there was some slight configuration change on the site that led to this problem -- maybe some SSL-protocol version change or something like that. (I don't think I updated anything on my machine in between the times it worked and stopped working.)
Thanks.
Here's sample code that fails:
#!/usr/bin/perl
use strict;
use warnings;
use constant AJB_URL => 'https://jobs.illinois.edu/academic-job-board';
use WWW::Mechanize;
my $mech = WWW::Mechanize->new( ssl_opts => { SSL_version => 'TLSv1' } );
$mech->get( AJB_URL );
It returns:
Error GETing https://jobs.illinois.edu/academic-job-board: Can't connect to jobs.illinois.edu:443 at ./test2.pl line 12.
... that script just stopped working the other day.
In most cases this is caused by server-side or client-side changes, but I assume that you did not make any changes on the client side.
Calling your code with perl -MIO::Socket::SSL=debug4... gives:
DEBUG: ...SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
Looking at the SSLLabs report, you see two trust paths, one of which requires an extra download. The root CA "USERTrust RSA Certification Authority" for the first trust path is not installed on my system (Ubuntu 14.04), and I guess it is not installed on yours either. This means the second trust chain will be used; its root CA, "AddTrust External CA Root", is installed on my system. Unfortunately, this trust chain is missing an intermediate certificate (the "Extra download"), so verification fails.
To fix the problem, find the missing intermediate certificate, which should match the fingerprint 2b8f1b57330dbba2d07a6c51f70ee90ddab9ad8e, and use it:
$ENV{PERL_LWP_SSL_CA_FILE} = '2b8f1b57330dbba2d07a6c51f70ee90ddab9ad8e.pem';
Looking at the certificate you see that it was issued on 22 May 2015, i.e. three days ago. This explains why the problem happened just now.
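If you prefer not to rely on the PERL_LWP_SSL_CA_FILE environment variable, the same bundle can be handed to the SSL layer through ssl_opts. A minimal sketch, assuming the PEM file from above sits next to the script and using SSL_ca_file, the underlying IO::Socket::SSL option that LWP passes through:
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

# Trust the certificate that completes the chain (file name from above).
my $mech = WWW::Mechanize->new(
    ssl_opts => { SSL_ca_file => '2b8f1b57330dbba2d07a6c51f70ee90ddab9ad8e.pem' },
);
$mech->get('https://jobs.illinois.edu/academic-job-board');
print $mech->status, "\n";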
I am trying to set up an ASP.NET MVC 2 application in a Linux environment. I've installed Ubuntu 10.10 on VirtualBox, then installed Mono 2.8 from source. After that I installed nginx and configured it as recommended here.
Unfortunately, FastCGI shows me the standard error 500 page:
No Application Found
Unable to find a matching application for request:
Host localhost:80
Port 80
Request Path /Default.aspx
Physical Path /var/www/mvc/Default.aspx
My application is located in the /var/www/mvc directory. I've tried creating a stub Default.aspx file and placing it in the root dir of my application, but it didn't help; the same error occurred.
Thanks.
I've been doing some testing with this as well, using all Ubuntu 10.10 binaries.
From what I can make of it, either nginx fails to pass the hostname or the Mono server fails to receive it over the FastCGI protocol. Anyhow, the tutorial line:
fastcgi-mono-server2 /applications=www.domain1.xyz:/:/var/www/www.domain1.xyz/ /socket=tcp:127.0.0.1:9000
doesn't work. Removing the hostname makes the thing work:
fastcgi-mono-server2 /applications=/:/var/www/www.domain1.xyz/ /socket=tcp:127.0.0.1:9000
but this of course blocks the use of multiple virtual mono hosts.
Since you are running an ASP.NET MVC 2 application, you should use fastcgi-mono-server4.
Adding the following line to /etc/nginx/fastcgi_params resolves the issue for me. It also allows the use of multiple virtual hosts.
fastcgi_param HTTP_HOST $host;
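For context, a cut-down sketch of the sort of server block this ends up in; the paths and the 127.0.0.1:9000 socket are taken from the commands elsewhere in this thread, so adjust to your own layout:
server {
    listen 80;
    server_name www.domain1.xyz;
    root /var/www/www.domain1.xyz;

    location / {
        # fastcgi_params now also passes HTTP_HOST, per the line above
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index Default.aspx;
        # where fastcgi-mono-server4 is listening
        fastcgi_pass 127.0.0.1:9000;
    }
}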
Does your application work with xsp (xsp4 if you are using .NET 4.0)? You'll want to make sure that is working before you try configuring the connection to another web server.
Does nginx know where to find Mono? You most likely have a parallel install, and it won't be in the default paths.
I use Apache, but you may still find some of the instructions on my blog useful:
http://tqcblog.com/2010/04/02/ubuntu-subversion-teamcity-mono-2-6-and-asp-net-mvc/
I had this problem just now; I too had been following the document on the Mono site.
I was trying to start the fastcgi-mono-server as it suggested:
sudo fastcgi-mono-server4 /applications=www.domain1.xyz:/:/var/www/www.domain1.xyz/ /socket=tcp:127.0.0.1:9000 &
However, when I did it like that, I got the same problem as you. I changed it to this:
sudo fastcgi-mono-server4 /applications=/:/var/www/www.domain1.xyz/ /socket=tcp:127.0.0.1:9000 &
And it worked (I had to type in www.domain1.xyz/Home/Index to see my MVC page; I haven't worked out how to stop it looking for www.domain1.xyz/default.aspx yet XD).
You need to make sure the domain set in your site config matches the domain passed to the FastCGI server. So, for example, if your default site (/etc/nginx/sites-enabled/default) has the following config:
server {
...
server_name www.domain1.xyz;
...
}
You would need to pass that domain into the FastCGI server:
sudo fastcgi-mono-server4 /applications=www.domain1.xyz:/:/var/www/www.domain1.xyz/ ...
Then, when you access the site, it will obviously need to be with the domain you set.