Which web protocols are possible for the URL of a CSS background image?

I am building an application that needs to str_replace() the strings http: or https: in the URL of a background-image. Does anyone know of any other protocols that are possible in the URL of a background-image? For example, are any of the following possible, or has anyone seen any of them in a CSS background-image URL?
TCP
UDP
ICMP
POP
FTP
IMAP
As one example, is this possible for CSS?
#myDiv { background-image: URL("ftps://mysite.com/pub/myimage.png"); }
Thanks for any insight!

It's clearly legal syntax according to the W3C CSS standard: you may put any valid URI there, but whether it actually works depends on the browser.
TCP, UDP and ICMP are transport- and network-layer protocols rather than URI schemes, so they won't work, but ftp, pop and imap can work in theory. See here for the syntax: https://en.wikipedia.org/wiki/URI_scheme
You may also replace all the images with data URIs, embedding the image data directly in the stylesheet.
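For illustration, a minimal sketch of both points; the ftps URL is the hypothetical one from the question, and the second rule embeds a 1x1 transparent GIF as a data URI (scheme support will vary by browser):

/* syntactically valid, but most browsers will simply refuse to fetch an ftps: resource */
#myDiv { background-image: url("ftps://mysite.com/pub/myimage.png"); }

/* a data URI sidesteps the question entirely: the image bytes (here a 1x1 transparent GIF) live in the stylesheet */
#myDiv { background-image: url("data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"); }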

Very interesting question.
Neither the CSS2 standard nor CSS1 seems to prescribe anything in particular, as long as the value is a valid URI as per RFC 3986.
Whether your URIs will actually work then depends on whether your browser can access that URI (and you have the appropriate permissions, etc).
I imagine this is intended, for a reason: it means that the CSS specification is transport layer agnostic, which makes plenty of sense.
In practice you can thus, for example, link to HTTPS resources in a CSS1-compliant stylesheet, even if CSS1 predates the RFC for HTTPS.

Related

Why use external libraries (like libcurl) vs. sockets for sending HTTP requests?

I'm new to network programming and have recently been playing around with using sockets in C++.
I have a pretty decent handle on it at this point, and I understand HTTP/TCP/IP packets pretty well.
However, upon doing some research online, it seems like the bulk of network programmers suggest using external libraries such as libcurl (or curlpp for C++) for sending HTTP requests.
Considering that HTTP is a text-based protocol, why is this more beneficial/easier than simply sending HTTP requests as text messages using socket programming?
I found a few sites that show that you can do this without too much difficulty: HTTP Requests in C++ without external libraries?,
Simple C example of doing an HTTP POST and consuming the response
It seems like sending HTTP requests is simply a matter of getting the formatting correct and then sending it via a TCP socket. How is this made easier with external libraries?
Please bear with me as I'm new to network programming and eager to learn.
The links you've provided in your question are in a way a pretty good explanation of why you should not code HTTP yourself: the first link only points to the socket API and says nothing about HTTP, while the second provides examples and code that are far too simplified for real-world use and will not even work with the typical setup of multiple domains on the same host, since the requests are missing the Host field. In other words: these are resources which might look useful to the inexperienced developer, but they will actually lead you into trouble.
HTTP is not as simple as it might look. HTTP/0.9 was simple, but is no longer supported by many clients. HTTP/1.0 is still fairly simple if you restrict yourself to the basics, but there are already enough pitfalls, like using the wrong delimiter between lines and between the request header and body, or omitting the Host field when accessing multi-domain hosts.
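To make that concrete, here is what a minimal well-formed HTTP/1.1 request looks like (example.com is just a placeholder): every line must end with CRLF ("\r\n"), the Host field is mandatory, and an empty line terminates the header block.

GET / HTTP/1.1
Host: example.com
Connection: close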
And once you want to be efficient, you want multiple requests per TCP connection and compression, and then it gets more complex. With HTTP/1.1 it gets even more complex due to chunked transfer encoding, and with HTTP/2 it gets more efficient but far more complex, with a binary protocol and interleaved requests and responses.
And this is only HTTP. With HTTPS you have the additional, non-trivial TLS layer, which has its own pitfalls.
Thus, if you just want to use HTTP and HTTPS, it is much better to use established libraries. Of course, if you want to learn the innards of HTTP, it can be useful to read all the relevant standards, look at packet traces and try to implement your own client.
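For comparison, a rough sketch of the same kind of request using libcurl in C (untested, error handling kept minimal); the library takes care of the Host header, redirects, chunked responses and, if built with TLS support, HTTPS:

#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    CURL *curl = curl_easy_init();                       /* create an "easy" handle */
    if (!curl)
        return 1;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);  /* follow redirects */
    CURLcode res = curl_easy_perform(curl);              /* response body goes to stdout by default */
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : 1;
}

Compile with something like cc demo.c -lcurl.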

JMAP uses /.well-known for service discovery; would it be considered a valid use of RFC 5785?

I was surprised to read this in the JMAP documentation:
A JMAP-supporting email host for the domain example.com SHOULD publish a SRV record _jmaps._tcp.example.com which gives a hostname and port (usually port 443).
The authentication URL is https://hostname/.well-known/jmap (following any redirects).
Other autodiscovery options using autoconfig.example.com or autodiscover.example.com may be added to a future version of JMAP to support clients which can’t use SRV lookup.
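For reference, that SRV record would look roughly like this in a zone file (the hostname and TTL are made up), and the client derives the well-known URL from the record's target:

_jmaps._tcp.example.com. 3600 IN SRV 0 1 443 jmap.example.com.
; priority 0, weight 1, port 443, target jmap.example.com
; => authentication URL: https://jmap.example.com/.well-known/jmap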
It doesn't match the original use cases for the well-known URI registry. Stuff like robots.txt, or dnt / dnt-policy.txt. And IPP / CUPS printing works fine without it, using a DNS TXT record to specify a URL. If you can look up SRV records, you can equally look up TXT. And the autodiscovery protocol involves XML which can obviously include a full URI.
So what chance is there of this being accepted into the registry of well-known URIs? Or is it more likely to remain something non-standard, like made-up URI schemes?
The idea almost certainly came from CalDAV, which is already in the registry of well-known URIs. RFC 6764 defines both DNS SRV and DNS TXT records as well as a well-known URI for CalDAV/CardDAV discovery. So JMAP's proposal is perfectly well-founded.
It might sound strange that the well-known URL is what you authenticate against, but this too has a precedent in CalDAV; I think it helps shard users across multiple servers.
IMO it's not a good way to use SRV. On the other hand, JMAP is specifically considering clients that don't use SRV. One presumes the CalDAV usage exists for similar reasons.
It does seem bizarre that presumably web-centric implementations can't manage to discover full URIs (i.e. if they're using the autoconfig protocol).
I think you have to remember that these approaches start from user email addresses. The hallowed Web Architecture using HTTP URIs for everything... well, let's say it doesn't have much to say about mailto: URIs. DNS has got to be the "right" way to bridge the gap from domains to URIs. But in a web-centric world where you don't necessarily know how to resolve DNS, or only how to look up IPs to speak HTTP with? There are going to be some compromises.

Is Fiddler a Good Building Block for an Internet Filter?

So I want to make my own Internet filter. Don't worry, I'm not asking for a tutorial. I'm just wondering if Fiddler would make a good backbone for it. I'm a little worried because it seems there are a few things Fiddler can't always pick up, or that there are workarounds. So, my questions:
Would Fiddler grab all web data? i.e., chats, emails, websites, etc.
Are there any known workarounds?
Any other reasons not to use it?
Thanks!
I think you mean FiddlerCore rather than Fiddler. Fiddler(Core) is a web proxy, meaning it captures HTTP/HTTPS traffic; it won't capture traffic that uses other protocols (e.g. IRC, etc.). To capture traffic from other protocols, you'll need a lower-level interception point (e.g. a Windows Firewall filter), which will capture everything, but it will not be able to decrypt HTTPS traffic, and parsing/modifying the traffic will prove MUCH harder.

How to redirect to another port in Play Framework

Can anyone tell me how to redirect to a module using another port? Example: redirect from http://localhost:9000 to https://localhost:9443/login.
Without changing ports, I'd just use #{Secure.login()} in the controller, but I couldn't find any way to redirect to another port.
The way you are meant to do this, for your example, is to use the .secure() method. It was added in Play 1.0.3.2.
So it would look like
#{Secure.login().secure()}
This is a special method on the router object that changes the URL from HTTP to HTTPS. The last time I checked, though, it didn't change the port. I raised a bug, but I'm not sure whether it has been fixed in the 1.2 master branch yet (https://github.com/playframework/play/blob/master/framework/src/play/mvc/Router.java).
The reason for this is that Play expects an HTTP server to sit in front of it in a production environment, handle HTTPS for you, and proxy requests through to Play as plain HTTP. The purpose of .secure() is to tell the URL to switch to HTTPS, but still go through the same domain.
I don't think there are many alternatives (and none that are nice and simple).
You could take the Play source and alter the Router.java file so that it also changes the port number (in the secure method).
Or you could write a FastTag that mimics Router.reverse (effectively what the # syntax does), but replaces the port number with a secure one.
As codemwnci explained, in production Play generally sits behind a front proxy that handles all secure-channel concerns and can also be used for load balancing.
The #{Secure.login().secure()} should work but it only changes http to https.
In addition, I would add the dumb kludge that can be used in a controller:
redirect("http://www.zenexity.fr:9876");
;)
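For completeness, here is a rough sketch of that kludge as a Play 1.x controller action, building the target URL from the current domain and a hard-coded HTTPS port (the 9443 port and the Secure.login action come from the question; this is untested):

package controllers;

import play.mvc.Controller;
import play.mvc.Router;

public class Application extends Controller {

    public static void goToSecureLogin() {
        // Router.reverse gives the path for the action (e.g. "/login");
        // the scheme and port are then forced by hand, since secure() keeps the original port.
        String path = Router.reverse("Secure.login").url;
        redirect("https://" + request.domain + ":9443" + path);
    }
}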

RESTful PUT and DELETE and firewalls

In the classic "RESTful Web Services" book (O'Reilly, ISBN 978-0-596-52926-0) it says on page 251 "Some firewalls block HTTP PUT and DELETE but not POST."
Is this still true?
If it's true, I have to allow overloaded POST to substitute for DELETE.
Firewalls blocking HTTP PUT/DELETE are typically blocking incoming connections (to servers behind the firewall). Assuming you have control over the firewall protecting your application, you shouldn't need to worry about it.
Also, firewalls can only block PUT/DELETE if they are performing deep inspection on the network traffic. Encryption will prevent firewalls from analyzing the URL, so if you're using HTTPS (you are protecting your data with SSL, right?) clients accessing your web service will be able to use any of the standard four HTTP verbs.
Some layer-7 firewalls could analyze traffic to this degree, but I'm not sure how many places would configure them that way. You might check on serverfault.com to see how popular such a config might be (you could also always check with your IT staff).
I would not worry about overloading a POST to support a DELETE request.
HTML 4.0 and XHTML 1.0 only support GET and POST requests (via the form element), so it is commonplace to tunnel a PUT/DELETE via a hidden form field which is read by the server and dispatched appropriately. This technique preserves compatibility across browsers and allows you to ignore any firewall issues.
Ruby on Rails and .NET both handle RESTful requests in this fashion.
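For example, the hidden-field convention Rails uses looks something like this (the URL is hypothetical); the framework reads the _method parameter on the server and dispatches the POST as a DELETE:

<form action="/articles/42" method="post">
  <input type="hidden" name="_method" value="delete">
  <input type="submit" value="Delete article">
</form>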
As an aside, GET, POST, PUT & DELETE requests are fully supported through the XMLHttpRequest object at present. XHTML 2.0's forms officially support GET, POST, PUT & DELETE as well.
You can configure a firewall to do whatever you want (at least in theory), so don't be surprised if some sysadmins do block HTTP PUT/DELETE.
The danger of HTTP PUT/DELETE concerns misconfigured servers: PUT replaces documents (and DELETE deletes them ;-) on the target server. So some sysadmins decide outright to block PUT in case a hole is opened somewhere.
Of course, we are talking about firewalls acting at "layer 7" and not just at the IP layer ;-)