How to fix Rebol Cheyenne 404 with domain name and configuration file?

On Windows Server 2008 I created:
reboltutorial.com [
    root-dir %/www/
    default [%index.html %index.rsp %index.php]
]
It returns a 404 "page not found" error. Cheyenne only works with the IP address (http://88.191.118.45:2011/ is OK; http://reboltutorial.com is OK too, but it is served by IIS 7).
How can I fix this?
Update: error log
Error in [conf-parser] : Can't access file www/ws-apps/ws-test-app.r
Error in [conf-parser] : Can't access file www/ws-apps/chat.r !

You have to make sure there is a directory named www in the folder you installed Cheyenne in (the default dir is %www/).
After that, make sure the missing www/ws-apps/ws-test-app.r and www/ws-apps/chat.r files also exist.
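For example, from a command prompt (a minimal sketch; C:\Cheyenne is an assumed install path, and the empty placeholder files only silence the conf-parser errors, since the real ws-test-app.r and chat.r ship with the Cheyenne source tree, copying those over is the proper fix):
cd /d C:\Cheyenne
mkdir www\ws-apps
rem Empty placeholders just to stop the conf-parser errors:
type nul > www\ws-apps\ws-test-app.r
type nul > www\ws-apps\chat.r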

First of all, an HTTP/1.1 client sends the domain name you typed along with every request: the path goes on the request line and the domain goes on the Host: header line. That's how one IP can serve multiple domains (Apache calls these virtual hosts), so browsing by IP sends a different Host value to whatever web server gets the request.
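For example, these two requests can arrive at the very same IP address and port, yet a name-based virtual host setup routes them differently because only the Host: line differs:
GET / HTTP/1.1
Host: reboltutorial.com

GET / HTTP/1.1
Host: 88.191.118.45:2011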
Thus it's not a great technical mystery for your machine to be set up in a way that it serves a different page for an IP address vs. a domain. But since you put "reboltutorial.com" in your Cheyenne config, it seems that, if anything, the domain version would be the one working while the IP-address version failed.
I don't run Cheyenne, and you haven't offered up more details about your configuration. But since no one has answered I looked at the source tree to offer some advice on what you might try.
We know Cheyenne is getting the request and making the decision to hand back the 404, because of the format of the error. The Apache one looks different:
http://reboltutorial.com/show-me-apache-404/
http://88.191.118.45:2011/show-me-cheyenne-404/
So Cheyenne is getting the request. That much we know. The decision to serve up a 404 is made in send-response in the HTTPd.r file. It's a pretty simple test:
if all [file? out/content not exists? out/content][
    log/error ["File not found: " mold out/content]
    out/code: 404
    out/content: none
]
If that's the place your 404 is being generated, then there should be a "File not found:" entry in your log along with the file it couldn't find. If not, something strange is going on. You can throw something in there (even a quit, if you're suspicious of the printed output) just to make sure execution reaches that line.
(FYI: when you're looking at other Cheyenne problems in the future, there is a setting called "verbosity" which affects the output, and you can see in on-received in the HTTPd.r file that for verbosity > 0 it logs each request it receives:
if verbose > 0 [
    log/info ["================== NEW REQUEST =================="]
    log/info ["Request Line=>" trim/tail to-string data]
]
If you bump up the verbosity level you might find an indication of the problem pretty quickly. If not, the code is fairly readable and you can put in your own trace points.)

Related

Getting Started With PeerJS

I am trying the simplest example I can, pulled directly from their website. Here is my entire html file, with code taken exactly from https://peerjs.com/index.html:
<script src="https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js"></script>
<script>
    var peer = new Peer();
    var conn = peer.connect('another-peers-id');
    // 'open' fires when you have successfully connected to the PeerServer
    conn.on('open', function(){
        // here you have conn.id
        conn.send('hi!');
    });
</script>
In Chrome and Edge I get this in the console:
peerjs.min.js:64 GET https://0.peerjs.com/peerjs/id?ts=15956160926060.016464029424720694 net::ERR_CONNECTION_REFUSED
In Firefox I get this:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://0.peerjs.com/peerjs/id?ts=15956162489620.8436734374800061. (Reason: CORS request did not succeed).
What am I doing wrong?
@reyad has requested "a full trace of requests and responses". Here's what I see in the network tab in Firefox and in Chrome:
[screenshots of the Firefox and Chrome network tabs omitted]
[Note: It would have been better if you could provide a full trace of requests and responses. This problem may occur for several reasons. I'll state two solutions, so try those. If they don't work, provide a full trace of requests and responses.]
1. First Solution:
Sometimes this type of error occurs because of a self-signed certificate. To solve it, open the developer tools and go to the network tab. You'll see a list of requests; select the one that failed because of CORS (i.e. the one that gave you Reason: CORS request did not succeed) and click it. If your problem is certificate-related, you'll see the following error message:
AN ERROR OCCURED: SEC_ERROR_INADEQUATE_KEY_USAGE
To solve this, open the URL that caused the problem directly and accept the certificate manually.
2. Second solution:
Check the failing request in the network tab of the developer tools (same as described in 1. First Solution). You'll find a Transferred column; see what's written there for the failed request. If it says Blocked By Some Ad-Blocker, disable the ad-blocker and your request will work fine.
[P.S.]: These solutions are based on assumptions; I hope they work. If these two don't, then please provide more info about the requests and responses, and also check this.
3. Third and final solution:
[Note: This solution may not solve your problem directly, but it'll give you an alternative solution and also some insight into what your problem is and how to work around it]
Before reading the solution below, read this to understand how Access-Control-Allow-Origin works (it is the reason for the CORS error).
Let me first explain how peerjs works:
PeerJS works with peer IDs. You have to get a peer ID either from the PeerJS cloud server or provide one yourself in the Peer constructor, i.e. new Peer("some-peer-id"). A peer ID has to be unique, because it is what identifies each user, and PeerJS uses it to send and receive data between users.
Now, you should know that you're using the PeerJS cloud server to get/generate a unique peer ID; that is the default server PeerJS uses unless you specify another one.
Now let me explain why you're facing this problem:
As you already know how CORS works, you may have guessed that the downloaded js file (https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js) is calling https://0.peerjs.com to retrieve/generate a new unique peer ID. But this request from https://your.website.com does not have Access-Control-Allow-Origin access for some reason; it may also be a middleware problem, so it's difficult to tell where the problem actually occurs. One thing is for sure though: it's not a fault in your code :D.
I hope all the concepts I've stated above are clear.
Now, to solutions:
Alternative approach 1 (using the PeerJS cloud server and your own provided ID):
In this approach you generate your own unique peer ID, so "https://your.website.com" does not have to call "https://0.peerjs.com" for one. [Note: make your peer ID long enough that it is always unique, at least 64 chars; see the sketch below]
In this way, you can avoid the CORS problem.
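A minimal sketch of that idea (assumption: any sufficiently long random string is acceptable as a peer ID; the variable names are mine):
// Generate a 64-hex-character id in the browser and hand it to Peer,
// so no call to the PeerJS cloud is needed to obtain an id.
var bytes = new Uint8Array(32);
crypto.getRandomValues(bytes);
var peerId = Array.prototype.map.call(bytes, function (b) {
    return ('0' + b.toString(16)).slice(-2);
}).join('');
var peer = new Peer(peerId); // signalling still goes through the default server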
Update:
I just saw a new issue on GitHub which says the public PeerJS cloud server is now unstable or does not work properly. It just gives errors like Firefox cannot establish a connection with the server at the address wss://0.peerjs.com/peerjs?key=peerjs&id=123222589562487856955685485555&token=ocyxworx62i in Firefox, and Error in connection establishment: net::ERR_CONNECTION_REFUSED in Chrome. For details check here. So it's better to use your own server (see the next approach).
Alternative approach 2 (using your own PeerJS server):
You can host your own PeerJS server instead of the PeerJS cloud server. That way you can allow access to anyone / any website you want. If you want to know how to host a PeerJS server, you may visit here.
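For example, the peer package from npm can start a server programmatically (a minimal sketch based on the peerjs-server README; the port and path here are arbitrary choices):
// server.js - run with: node server.js  (after: npm install peer)
const { PeerServer } = require('peer');
// Clients would then connect with new Peer(id, { host, port: 9000, path: '/myapp' })
const peerServer = PeerServer({ port: 9000, path: '/myapp' });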
[P.S.]: I have studied the peerjs issues on GitHub. After reading them all, it seems better to use your own server rather than the PeerJS cloud: there are a lot of problems with each new release of peerjs, mostly related to the connection to the PeerJS cloud, and the cloud itself does not seem stable. They were hosting it at 0.peerjs.com:9000 before (not secure), but now at 0.peerjs.com:443.
I haven't used peerjs before nor set up a peerjs server. If you want to set one up, I hope the community will be able to help you do that properly.
What I understand from your question is that there is a CORS (cross-origin resource sharing) issue. Maybe what I am suggesting is not very intuitive.
First: download "https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js" to your local directory and then include the local JavaScript file in the HTML,
like: <script src="./peerjs.min.js"></script>
Second:
you are using var peer = new Peer();
but please provide your own unique ID instead. For example, I just created a random ID and provided it.
StackOverflow link: https://stackoverflow.com/questions/21216758/peerjs-set-your-own-peerid
var a_random_id = Math.random().toString(36).replace(/[^a-z]+/g, '').substr(2, 10);
var peer = new Peer(a_random_id, {key: 'myapikey'});
Third: the best option is to run PeerServer, a server for PeerJS, of your own.
If you don't want to develop anything, just enter the few commands below.
Install the package globally:
$ npm install peer -g
Run the server:
$ peerjs --port 9000 --key peerjs --path /myapp
Started PeerServer on ::, port: 9000, path: /myapp (v. 0.3.2)
Check it: http://127.0.0.1:9000/myapp. It should return JSON with name, description, and website fields.
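To point your page at this local server instead of the PeerJS cloud, pass connection options to the Peer constructor (a sketch; host, port, path, and key mirror the peerjs command above, and secure is false because the local test server speaks plain HTTP):
var a_random_id = Math.random().toString(36).replace(/[^a-z]+/g, '').substr(2, 10);
var peer = new Peer(a_random_id, {
    host: '127.0.0.1',
    port: 9000,
    path: '/myapp',
    key: 'peerjs',   // must match the --key passed to peerjs
    secure: false    // the local test server is plain HTTP
});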
Details: https://github.com/peers/peerjs-server

Why does BitBake error if it can't find www.example.com?

BitBake fails for me because it can't find https://www.example.com.
My computer is an x86-64 running native Xubuntu 18.04. Network connection is via DSL. I'm using the latest versions of the OpenEmbedded/Yocto toolchain.
This is the response I get when I run BitBake:
$ bitbake -k core-image-sato
WARNING: Host distribution "ubuntu-18.04" has not been validated with this version of the build system; you may possibly experience unexpected failures. It is recommended that you use a tested distribution.
ERROR: OE-core's config sanity checker detected a potential misconfiguration.
Either fix the cause of this error or at your own risk disable the checker (see sanity.conf).
Following is the list of potential problems / advisories:
Fetcher failure for URL: 'https://www.example.com/'. URL https://www.example.com/ doesn't work.
Please ensure your host's network is configured correctly,
or set BB_NO_NETWORK = "1" to disable network access if
all required sources are on local disk.
Summary: There was 1 WARNING message shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
The networking issue, the reason why I can't access www.example.com, is a question for the SuperUser forum. My question here is, why does BitBake rely on the existence of www.example.com? What is it about that website that is so vital to BitBake's operation? Why does BitBake post an Error if it cannot find https://www.example.com?
At this time, I don't wish to set BB_NO_NETWORK = "1". I would rather understand and resolve the root cause of the problem first.
Modifying poky.conf didn't work for me (and from what I read, modifying anything under poky is a no-no for a long-term solution).
Modifying conf/local.conf in the build directory was the only solution that worked for me. Simply add one of the two options:
#check connectivity using google
CONNECTIVITY_CHECK_URIS = "https://www.google.com/"
#skip connectivity checks
CONNECTIVITY_CHECK_URIS = ""
This solution was originally found here.
For me, this appears to be a problem with my ISP (CenturyLink) not correctly resolving www.example.com. If I try to navigate to https://www.example.com in the browser address bar, I just get taken to the ISP's "this is not a valid address" page.
Technically speaking, this isn't supposed to happen, but for whatever reason it does. I was able to work around it temporarily by pointing CONNECTIVITY_CHECK_URIS in poky/meta-poky/conf/distro/poky.conf at something that actually resolves:
# The CONNECTIVITY_CHECK_URI's are used to test whether we can succesfully
# fetch from the network (and warn you if not). To disable the test set
# the variable to be empty.
# Git example url: git://git.yoctoproject.org/yocto-firewall-test;protocol=git;rev=master
CONNECTIVITY_CHECK_URIS ?= "https://www.google.com/"
See this commit for more insight and discussion on the addition of the www.example.com check. Not sure what the best long-term fix is, but the change above allowed me to build successfully.
If you want to resolve this issue without modifying poky.conf or local.conf or any other file for that matter, just do:
$ touch conf/sanity.conf
It is clearly written in meta/conf/sanity.conf that:
Expert users can confirm their sanity with "touch conf/sanity.conf"
If you don't want to execute this command on every session or build, you can instead comment out the INHERIT += "sanity" line in meta/conf/sanity.conf, so the file looks something like the sketch below.
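For reference, the relevant part of meta/conf/sanity.conf would then look something like this (a sketch; only the INHERIT line changes, the rest of the file stays as shipped):
# Expert users can confirm their sanity with "touch conf/sanity.conf"
#INHERIT += "sanity"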
I had the same issue with Bell as my ISP: accessing example.com gave a DNS error.
I solved it by switching from the ISP's DNS servers to Google's public DNS (to avoid making changes to any configs):
https://developers.google.com/speed/public-dns/docs/using

modsecurity for iis not logging request body

We have an ASP.NET/Angular/Web API application on IIS 7.5 that is being called with bad encoding, and we're trying to get the request body logged so we can show the problem to the caller.
We googled around and found ModSecurity, so we installed it and are giving it a try, but only for the audit-logging portion. Unfortunately, neither part C (request body) nor part I seems to log anything, no matter what we do. I've seen some other Stack Overflow posts that I read to mean ModSecurity only logs those parts for certain incoming requests (.html requests log part C but nothing else does, that kind of thing). Web API and Angular might be confusing it, but I'm not sure. Nothing I've tried seems to work.
Here's our configuration:
# -- Audit log configuration -------------------------------------------------
# Log the transactions that are marked by a rule, as well as those that
# trigger a server error (determined by a 5xx or 4xx, excluding 404,
# level response status codes).
#
SecAuditEngine On
SecRequestBodyAccess On
# Log everything we know about a transaction.
SecAuditLogParts ABCIJDEFHZ
# Use a single file for logging. This is much easier to look at, but
# assumes that you will use the audit log only occasionally.
#
SecAuditLogType Serial
SecAuditLog E:\ModLogs\modsec_audit4.log
Is there something else I've got to do to get ModSecurity to do the C/I logging?
ModSecurity 2.9.1 (downloaded today), IIS 7.5, Web Api, Angular, ASP.Net
Thanks
This looks to be a known bug with ModSecurity and IIS: https://github.com/SpiderLabs/ModSecurity/issues/538
Near the bottom of that bug there are comments that this can be resolved by setting this in your config:
SecStreamInBodyInspection On
There are also some posts suggesting that disabling dynamic compression (DynamicCompression) in IIS is necessary; a hedged example follows.
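If you want to try that as well, dynamic compression can be turned off per site in web.config (standard IIS configuration; shown as a sketch, since this suggestion comes from the linked bug thread rather than the ModSecurity docs):
<!-- web.config: disables IIS dynamic compression, as some posts suggest for this issue -->
<configuration>
  <system.webServer>
    <urlCompression doDynamicCompression="false" />
  </system.webServer>
</configuration>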

Perl SOAP::WSDL accessing HTTPS Unauthorized error

I'm trying to generate a Perl library to connect to a web service. This web service is on an HTTPS server and my user has access to it.
I've executed wsdl2perl.pl several times, with different options, and it always fails with the message: Unauthorized at /usr/lib/perl5/site_perl/5.8.8/SOAP/WSDL/Expat/Base.pm line 73.
The thing is, when I don't give my user/pass as arguments, it doesn't even ask for them.
I've read SOAP::WSDL::Manual::Cookbook (http://search.cpan.org/~mkutter/SOAP-WSDL-2.00.10/lib/SOAP/WSDL/Manual/Cookbook.pod) and done what it says about HTTPS: Crypt::SSLeay is installed, and both SOAP::WSDL::Transport::HTTP and SOAP::Transport::HTTP are modified.
Can you give any hint about what may be going wrong?
Can you freely access the WSDL file from your web browser?
Can someone else in your network access it without any problems?
Maybe the web server hosting the WSDL file requires Basic or some other kind of Authentication...
If it's not necessary, I don't recommend using Perl as a web service client. As you know, Perl is an open-source language; it does support the SOAP protocol, but the support doesn't seem very standard: the documentation is not very clear, the support is sometimes limited, and bugs exist here and there.
So if you have to use wsdl2perl, you can use Komodo to step into the code and find out what happened; that is what I used to do when using Perl as a web service client. Behind HTTPS is SSL, so if your SSL setup is certificate-authenticated, you have to set up your cert path and the list of trusted server certs. You'd better test with Firefox on Linux first: you can set up Firefox's cert path and its trusted cert list, and if Firefox can communicate with your web service server successfully, then it's time to debug your Perl client.
To debug situations with Perl and SOAP, interpose a web proxy so you can see exactly what data is being passed and what response comes back from the server. You were getting a 401 Unauthorized, I expect, but there may be more detail in the server response.
Both Fiddler http://docs.telerik.com/fiddler and Charles proxy https://www.charlesproxy.com/ can do this.
The error message you quote seems to be from this line :
die $response->message() if $response->code() ne '200';
and in the HTTP world, Unauthorized is clearly error code 401, which means your website asks for a username and password (most probably; some websites may "hijack" this error code to cater for other conditions, like a filter on the source IP).
Do you have them?
If so, you can either:
after wsdl2perl has run, find in the generated files where set_proxy() is called and change the URL there to include the username and password, like this: ...->set_proxy('http://USERNAME:PASSWORD@www.example.com/...');
or, in your code, after instantiating the SOAP::WSDL object, call service(SERVICENAME) on it (for each service defined in your WSDL file), which gives you a new object; on that object call transport() to access the underlying transport object, and on that either call proxy() with the URL formatted as above (yes, it is proxy() here and set_proxy() above), or call credentials() and pass 4 strings:
'HOSTNAME:PORT'
the realm, as given by the web server (but I think you can put anything)
the username
the password
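A minimal sketch of the credentials() route, following the steps just described (everything here is illustrative: the WSDL URL, service name, and realm are placeholders, and the method names mirror the description above rather than a specific SOAP::WSDL release, so verify them against your generated code; credentials() itself is the standard LWP::UserAgent signature):
use strict;
use warnings;
use SOAP::WSDL;

# Instantiate against the WSDL, then fetch the service by name
my $wsdl = SOAP::WSDL->new(wsdl => 'https://www.example.com/service.wsdl');
my $service = $wsdl->service('MyServiceName');

# Hand the HTTP Basic credentials to the underlying transport:
# host:port, realm, username, password (the 4 strings listed above)
$service->transport()->credentials(
    'www.example.com:443',
    'Some Realm',
    'USERNAME',
    'PASSWORD',
);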

Coldfusion FarCry CMS error on start up after server reboot

We're using Farcry CMS which runs on top of ColdFusion. Site was running fine but we are getting this error message after a web server reboot.
"Failed to initialise core type: dmHTML.cfc"
"Parameter 1 of function IsDefined, which is now application.stcoapi.dmHTML.stWebskins.Copy of displayPageCalculatorSelector.displayname, must be a syntactically valid variable name."
I'm really not sure where to start; could anyone suggest a strategy for troubleshooting this type of error?
Looks like you have a file called "Copy of displayPageCalculatorSelector.cfm" in your dmHTML webskin folder. FarCry registers each webskin file under application.stcoapi.dmHTML.stWebskins.<filename>, and a filename containing spaces is not a syntactically valid CFML variable name, which is exactly what the IsDefined() error is complaining about.
Removing that file is the best option.
Or rename it to remove the spaces, e.g. "Copy_of_displayPageCalculatorSelector.cfm".