I'm trying to implement a basic chat application with Lift 3 and lift-ng (Angular). Mostly, this is working. However, I am getting this warning in my (server side) log:
[qtp1721931908-24] WARN net.liftweb.http.ContentSecurityPolicyViolation - Got a content security violation report we couldn't interpret:
'Full({"csp-report":{"document-uri":"http://localhost:8081/","referrer":"","violated-directive":"script-src 'unsafe-eval' 'self'","effective-directive":"script-src","original-policy":"default-src 'self'; img-src *; script-src 'unsafe-eval' 'self'; style-src 'self' 'unsafe-inline'; report-uri /lift/content-security-policy-report","blocked-uri":"inline","status-code":200}})'.
What I want to know is how to track down what is causing that violation. I'm unclear about which part of the code or binding might be triggering it, and about how to narrow it down without painstakingly commenting out every single part of the codebase.
I can easily get rid of the violation by setting up this SecurityRule in my Boot.scala:
LiftRules.securityRules = () => {
  SecurityRules(content = Some(ContentSecurityPolicy(
    scriptSources = List(
      ContentSourceRestriction.Self,
      ContentSourceRestriction.UnsafeInline,
      ContentSourceRestriction.UnsafeEval)
  )))
}
However, I'd like to avoid using the unsafe inlining/eval.
If you look in your browser's developer console, the violation entry may contain a link to the offending inline code. For example, when I clicked on "hash:2:0" it took me to the HTML source of the page, line 2, character 0, which is exactly where the offending code was.
One thing to note is that a lot of browser extensions (e.g. LastPass) generate this alert by injecting inline scripts into the page. Be sure to disable all extensions while testing.
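Once you have located an inline script you actually want to keep, you can often whitelist just that script instead of enabling unsafe-inline wholesale: CSP level 2 accepts a hash source such as script-src 'self' 'sha256-<base64 digest>'. The digest is simply base64(sha256(exact script text)), so any language can compute it. A minimal sketch in Go (the script text below is a hypothetical placeholder):

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func main() {
	// The exact bytes of the inline script, without the surrounding <script> tags.
	script := "console.log('hello');" // hypothetical inline script
	sum := sha256.Sum256([]byte(script))
	// Paste the output into the policy, e.g.: script-src 'self' 'sha256-...'
	fmt.Printf("'sha256-%s'\n", base64.StdEncoding.EncodeToString(sum[:]))
}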
We have an API that expects our own vendor-specific content type, for example application/vnd.xxxx.custom.custom-data+json, but looking through the source code of REST.Client it seems to always default to one of the ContentTypes in REST.Types; for example, when assigning ctNone in my body request it defaults to ctAPPLICATION_X_WWW_FORM_URLENCODED.
I've tried assigning the content type directly to the TRESTClient.ContentType property, but that gets overwritten by the TRESTRequest.ContentType value. I've also added the custom content type as a parameter on TRESTRequest, which does get recognised, but it still appends ctAPPLICATION_X_WWW_FORM_URLENCODED on the end, causing an invalid MIME type exception.
var
  APIClient: TRESTClient;
  APIRequest: TRESTRequest;
  JsonToSend: TStringStream;
begin
  APIClient := TRESTClient.Create(API_URL);
  APIRequest := TRESTRequest.Create(nil);
  JsonToSend := TStringStream.Create(strJson, TEncoding.UTF8);
  try
    APIClient.Accept := 'application/vnd.xxxx.custom.custom-data+json';
    // Below line gets overwritten
    APIClient.ContentType := 'application/vnd.xxxx.custom.custom-data+json';
    APIRequest.Client := APIClient;
    APIRequest.Resource := 'ENDPOINT_URL';
    APIRequest.Accept := 'application/vnd.xxxx.custom.custom-data+json';
    // This includes the custom content type in the request, but the preset
    // one is appended as well -- ctAPPLICATION_X_WWW_FORM_URLENCODED when
    // ctNone is set
    APIRequest.AddParameter(
      'Content-Type',
      'application/vnd.xxxx.custom.custom-data+json',
      pkHTTPHEADER,
      [poDoNotEncode]
    );
    APIRequest.AddBody(JsonToSend, ctNone);
    APIRequest.Method := rmPost;
    try
      APIRequest.Execute;
    except
      on E: Exception do
        ShowMessage('Error on request: '#13#10 + E.Message);
    end;
  finally
    JsonToSend.Free;
    APIRequest.Free;
    APIClient.Free;
  end;
end;
I would expect that, if a content type has been provided in the header parameters, it would be used instead of any of the preset ones. However, the API raises an exception because an unknown media type was provided. The exception reads:
Invalid mime type "application/vnd.xxxx.custom.custom-data+json, application/x-www-form-urlencoded": Invalid token character ',' in token "vnd.xxxx.custom.custom-data+json, application/x-www-form-urlencoded"
My understanding is that it recognises the custom content type provided in the params but still appends one of the preset content types from REST.Types to the request header, causing it to fail. I would expect it to send the body with a Content-Type header of just application/vnd.xxxx.custom.custom-data+json, without application/x-www-form-urlencoded.
Apparently TRESTClient is trying to act too smart in your scenario. However, there is a regular way around that. The key is:
to add a single content to the request body that is not any of ctNone, ctMULTIPART_FORM_DATA or ctAPPLICATION_X_WWW_FORM_URLENCODED;
to override Content-Type using a custom header value.
Sample code:
uses
  System.NetConsts;

RESTClient1.BaseURL := 'https://postman-echo.com/post';
RESTRequest1.Method := rmPOST;
RESTRequest1.Body.Add('{ "some": "data" }', ctAPPLICATION_JSON);
RESTRequest1.AddParameter(sContentType, 'application/vnd.hmlr.corres.corres-data+json',
  pkHTTPHEADER, [poDoNotEncode]);
RESTRequest1.Execute;
The response from echo service is:
{
  "args": {},
  "data": {
    "some": "data"
  },
  "files": {},
  "form": {},
  "headers": {
    "x-forwarded-proto": "https",
    "host": "postman-echo.com",
    "content-length": "18",
    "accept": "application/json, text/plain; q=0.9, text/html;q=0.8,",
    "accept-charset": "UTF-8, *;q=0.8",
    "content-type": "application/vnd.hmlr.corres.corres-data+json",
    "user-agent": "Embarcadero RESTClient/1.0",
    "x-forwarded-port": "443"
  },
  "json": {
    "some": "data"
  },
  "url": "https://postman-echo.com/post"
}
Pay attention to the echoed headers, especially Content-Type of course. I tested the sample in Delphi 10.2 Tokyo, so hopefully it will also work in XE8.
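If you'd rather verify the outgoing headers against a local endpoint instead of postman-echo.com, a few lines of Go make a throwaway echo server (a sketch; the port and path are arbitrary choices, not anything required by TRESTClient):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/post", func(w http.ResponseWriter, r *http.Request) {
		// Echo back the Content-Type header the client actually sent.
		fmt.Fprintf(w, "content-type: %s\n", r.Header.Get("Content-Type"))
	})
	http.ListenAndServe(":8080", nil)
}

Point the TRESTClient BaseURL at http://localhost:8080/post and inspect the response body to see exactly what was sent.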
Edit
The behaviour you observe is a bug (RSP-14001) that was fixed in RAD Studio 10.2 Tokyo.
There are various ways to resolve that. To name a few:
Adapt your API to discard secondary mime type.
Change your client implementation to TNetHttpClient instead, if you can give up all additional benefits that TRestClient provides.
Upgrade to RAD Studio 10.2+.
Hack it! This option is strongly discouraged, but it can help you better understand TRESTClient implementation details.
The easiest way to hack it would be to patch the method TCustomRESTRequest.ContentType (note we're talking about the overload with a single argument) to return the ContentType of a parameter if its AParamsArray argument contains a single parameter of kind pkREQUESTBODY. This would allow us to add a body of type ctNone to the request, so that the patched method would return ctNone as well, which would effectively prevent appending another value to the Content-Type header.
Another option would be to patch the method TRESTHTTP.PrepareRequest to prefer a custom Content-Type header over the inferred content type of the request. This is, by the way, how the current implementation works after the fix in RAD Studio 10.2 Tokyo. The same logic is applied to other headers as well: Accept, Accept-Charset, Accept-Encoding, User-Agent. Patching TRESTHTTP.PrepareRequest is slightly harder to achieve because it has private visibility.
The hardest option would be patching TWinHTTPRequest.SetHeaderValue to discard the secondary content type value. This is also the most dangerous one, because it would impact anything HTTP-related (anything that relies on THTTPClient) in your application. It's also hard, though not impossible, to patch the class, because it's completely hidden in the implementation section of System.Net.HttpClient.Win.pas. This is a huge shame, because it also prevents you from creating custom subclasses. Maybe for a good reason... who knows ;)
I created my own “404 Page not found” error page on a TYPO3 website and implemented it via the /typo3conf/LocalConfiguration.php as follows, using the page’s Speaking URL path:
return [
    ...
    'FE' => [
        ...
        'pageNotFound_handling' => '/page-not-found/',
    ]
]
Now when I call a non-existing page, the error page gets displayed, but there is a 4-digit alphanumeric value (hexadecimal, as far as I've seen so far) BEFORE the HTML source code and a "0" AFTER it. Example (the value at the beginning is different after most reloads):
37b3
<!DOCTYPE html>
...
</html>
0
When calling the error page URL itself the page is returned correctly without those numbers.
Having the RealURL extension activated or deactivated does not make a difference.
Thanks a lot in advance!
I added the full description from the install tool and I guess we might find the solution there.
How TYPO3 should handle requests for non-existing/accessible pages.
empty (default)
The next visible page upwards in the page tree is shown.
'true' or '1'
An error message is shown.
String
Static HTML file to show (reads content and outputs with correct headers), e.g. notfound.html or http://www.example.org/errors/notfound.html.
Prefix "REDIRECT:"
If prefixed with "REDIRECT:" it will redirect to the URL/script after the prefix.
Prefix "READFILE:"
If prefixed with "READFILE" then it will expect the remaining string to be a HTML file which will be read and outputted directly after having the marker "###CURRENT_URL###" substituted with REQUEST_URI and ###REASON### with reason text, for example: READFILE:fileadmin/notfound.html.
Prefix "USER_FUNCTION:"
If prefixed with "USER_FUNCTION:" a user function is called, e.g. USER_FUNCTION:fileadmin/class.user_notfound.php:user_notFound->pageNotFound where the file must contain a class user_notFound with a method pageNotFound() inside with two parameters $param and $ref.
What you configured:
You're passing a string, so TYPO3 expects to find a static file, which you don't have, because what you configured is more like a URL path.
From what you're trying to achieve, I'd go with REDIRECT:/page-not-found/.
Thanks for pointing this one out, by the way. I will remove the string configuration from the core, since it does not make sense to have more people trip over this pitfall.
In short: change the following line in the FE section of your LocalConfiguration.php:
'pageNotFound_handling' => '/your404page.html',
to
'pageNotFound_handling' => 'REDIRECT:/your404page.html',
Cause
The actual cause is a combination of chunked transfer encoding (Transfer-Encoding: chunked) and TYPO3 not being able to decode it in some cases. In your case the page-not-found handler eventually uses GeneralUtility::getUrl() to retrieve the error page.
If you have [SYS][curlUse] enabled, it uses cURL to retrieve the page and there is no problem.
If you don't have [SYS][curlUse] enabled, it opens a socket, reads the headers and then reads the rest of the body. If the web server uses chunked transfer encoding, the body contains blocks of data, and each block starts with a line giving its length in hexadecimal format. The content ends with an empty block (with, of course, a length line of "0").
cURL knows how to decode chunked data automatically.
getUrl() itself does not know how to handle chunked data and uses the content as-is as the page content.
In TYPO3 8 LTS the Guzzle library is used to handle HTTP requests. In the Guzzle code I can't find anything about handling chunked data. Guzzle checks whether the cURL PHP extension is present and uses it as the preferred transport. In most installations cURL is present, and since it decodes chunked data automagically, no problem is visible. I still have to test Guzzle with cURL disabled in PHP to see whether the issue is also present in v8/master.
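To make the failure mode concrete, this is roughly what a decoder has to do with a chunked body: read a hexadecimal length line, read that many bytes, and repeat until a zero-length chunk. A minimal sketch in Go of the format described above (an illustration of the wire format, not TYPO3 code):

package main

import (
	"bufio"
	"fmt"
	"io"
	"strconv"
	"strings"
)

// decodeChunked consumes an HTTP/1.1 chunked body: a hex length line,
// that many bytes, a CRLF, repeated until a "0" length line ends the body.
func decodeChunked(r *bufio.Reader) (string, error) {
	var body strings.Builder
	for {
		line, err := r.ReadString('\n')
		if err != nil {
			return "", err
		}
		size, err := strconv.ParseInt(strings.TrimSpace(line), 16, 64)
		if err != nil {
			return "", err
		}
		if size == 0 {
			return body.String(), nil // final "0" chunk, like the trailing 0 you saw
		}
		if _, err := io.CopyN(&body, r, size); err != nil {
			return "", err
		}
		if _, err := r.ReadString('\n'); err != nil { // consume CRLF after the chunk
			return "", err
		}
	}
}

func main() {
	// The "37b3" before your HTML and the "0" after it are exactly these
	// length lines, left undecoded. A tiny wire example:
	wire := "5\r\nhello\r\n0\r\n\r\n"
	out, _ := decodeChunked(bufio.NewReader(strings.NewReader(wire)))
	fmt.Println(out) // prints "hello"
}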
Workaround/solution
If the PHP extension cURL is enabled in your installation, you can simply enable [SYS][curlUse] in the Install Tool. The numbers around the 404 page content will then disappear.
I am currently trying to implement a "simple" read-only CalDAV interface for a system, but the synchronization protocol and the CalDAV clients are giving me some headaches.
The main test client I use is the macOS Calendar (Sierra).
The initial handshake (DAV principal, calendar lookup) and the initial load of data are working. I get some REPORT calendar-query requests.
The issue is the incremental sync after initial load. There are two approaches:
Via WebSync-extension (REPORT:sync-collection and sync-token prop)
My main issue here is that provisioning the sync-token on the server side is not trivial in my system. Changes and new data are not a problem, but physical deletions (not yet logged in the user context) and changes in the scope of group and/or role assignments are. Maybe I need to consider invalidating the sync-token in such complex cases and letting the client re-sync without sync-collection?
A nasty workaround could be to retain the calendar item IDs sent to each client and, on every request, check whether they still exist, responding with a not-found status for each deleted or out-of-scope calendar item. But this would mean storing client state on the server, which doesn't sound right and might be error-prone.
Via basic protocol synchronization (responding to REPORT calendar-query and PROPFIND (depth=1) requests; no WebDAV-Sync active)
This is also already working in principle for new and changed data, but macOS Calendar doesn't remove items that are not part of the collection response (PROPFIND with depth=1). According to the protocol, the client should determine the deleted items and remove them, but it doesn't do so in my case. Any ideas here?
For my system it would currently be ideal to use this approach, even though the performance might not be ideal.
With the iOS Calendar I face another issue:
The initial handshake is somehow working: the requests come in over the network and are answered.
But then a MKCALENDAR request arrives (instead of a calendar-query or PROPFIND for items), which I answer with 403, since I also don't advertise it in the Allow header of the OPTIONS response. The request looks like this:
MKCALENDAR /services/cal/_userid/220EDB4A-F00C-41C9-B78F-10781BBA77E4/ HTTP/1.1
Host: 127.0.0.1:8003
Content-Type: text/xml
User-Agent: iOS/10.0.1 (14A403) dataaccessd/1.0

<?xml version="1.0" encoding="UTF-8"?>
<B:mkcalendar xmlns:B="urn:ietf:params:xml:ns:caldav">
  <A:set xmlns:A="DAV:">
    <A:prop>
      <B:calendar-free-busy-set>
        <NO/>
      </B:calendar-free-busy-set>
      <D:calendar-order xmlns:D="http://apple.com/ns/ical/">1</D:calendar-order>
      <A:displayname>Kalender</A:displayname>
      <B:calendar-timezone>BEGIN:VCALENDAR
...deleted....
      </B:calendar-timezone>
      <B:supported-calendar-component-set>
        <B:comp name="VEVENT"/>
      </B:supported-calendar-component-set>
    </A:prop>
  </A:set>
</B:mkcalendar>
Nothing happens afterwards.
Is anyone experiencing this as well? Why does the iOS Calendar try a MKCALENDAR even though I provide a calendar collection as the resource type?
With Thunderbird Lightning:
The initial handshake with the calendar collection is working.
A PROPFIND and multiget request for items is answered with iCal items.
But they are not displayed, and in the error log I receive:
Warning: CalDAV: Get failed: CalDAV: Error: got status 200 fetching calendar data for Debug Proxy, null
(translated from German; error code: 0x80004005) Warning: Error reading data for calendar: Debug Proxy. However, this error is probably negligible, so the program will try to continue. Error code: 0x80004005. Description: CalDAV: Error: got status 200 fetching calendar data for Debug Proxy, null
(translated from German; error code: READ_FAILED) Warning: Error reading data for calendar: Debug Proxy. However, this error is probably negligible, so the program will try to continue. Error code: READ_FAILED. Description:
http channel Listener OnDataAvailable contract violation
A similar response works in macOS Calendar, though. Could it be some encoding issue?
Any hints are highly appreciated!
This is indeed a pretty broad question. But let me try to address some stuff:
Via WebSync-extension (REPORT:sync-collection and sync-token prop) my main issue here is that provisioning the sync-token from the server is not trivial in my system
Even if it is hard for you, you should really try to come up with something here, even if this means storing some extra info on the server. Sync-collection is way more efficient.
(Idea: Maybe you can at least set a flag when something actually got deleted and only then expire the sync-token?)
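For illustration, a sketch of that bookkeeping in Go (all names are hypothetical, not any CalDAV library API): keep a monotonically increasing change counter per collection as the sync-token, plus a small tombstone log for deletions, and force a full re-sync only when a token can no longer be answered:

package main

import "fmt"

// Collection tracks just enough state to answer a sync-collection REPORT.
type Collection struct {
	token      int            // current sync-token, monotonically increasing
	changed    map[string]int // item href -> token when last modified
	tombstones map[string]int // item href -> token when deleted
}

// SyncSince returns changes and deletions after a client-supplied token.
// ok=false means the token is unknown and the client must do a full re-sync.
func (c *Collection) SyncSince(clientToken int) (changed, deleted []string, ok bool) {
	if clientToken > c.token {
		return nil, nil, false // unknown token: expire it, force re-sync
	}
	for href, t := range c.changed {
		if t > clientToken {
			changed = append(changed, href)
		}
	}
	for href, t := range c.tombstones {
		if t > clientToken {
			deleted = append(deleted, href) // report as 404 in the REPORT response
		}
	}
	return changed, deleted, true
}

// Delete records a tombstone so later syncs can report the removal.
func (c *Collection) Delete(href string) {
	c.token++
	delete(c.changed, href)
	c.tombstones[href] = c.token
}

func main() {
	c := &Collection{token: 1, changed: map[string]int{"/cal/1.ics": 1}, tombstones: map[string]int{}}
	c.Delete("/cal/1.ics")
	ch, del, ok := c.SyncSince(1)
	fmt.Println(ch, del, ok) // [] [/cal/1.ics] true
}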
Via basic protocol synchronization (responding to REPORT calendar-query and PROPFIND (depth=1))
Which one, calendar-range-query or PROPFIND? Completely different things ...
this is also already working in principle for new and changed data, but macOS Calendar doesn't remove items that are not part of the collection response (PROPFIND with depth=1).
If we are talking about a calendar-range-query, the client cannot proactively delete items since it doesn't know whether they just left the range (vs being deleted).
With PROPFIND it should do this. If you have proof it doesn't, maybe create another question with all the relevant details.
With the iOS Calendar I face another issue: ... a MKCALENDAR request arrives ...
This probably means that it can't find the default scheduling calendar, or no calendar at all, or none with a proper component-type property. Or all the same for todos (the Reminders app, same account). What is the payload of the MKCALENDAR?
It is hard to diagnose without details; if you can't figure it out, ask a specific question on this with all the relevant details included (e.g. the XML you send in response to the home query).
Thunderbird Lightning
Can't say much about this, probably depends a lot on the version and what extensions you are using. AFAIK many people use the ScalableOGo Thunderbird extensions to get proper Cal/CardDAV with Thunderbird.
For Thunderbird/Lightning you may want to turn on calendar.debug.log and calendar.debug.log.verbose in the advanced config editor and restart. You can find it in Options > Advanced > General > Config Editor. This will get you more detailed http requests and information about what failed. You can also hook up the remote debugger and look at the network monitor, or set breakpoints in the code.
With Thunderbird/Lightning, please note that we are using a mix of previous and current versions of the webdav-sync draft. I can't say much from the error message as-is, given that it is very general, but it does look like there is something unexpected in the results.
Maybe it makes sense to compare the handshake between an existing server (like sabre/dav) and the client, then see where the difference between your communication and theirs is.
Also, you may be interested in the CalDAVTester from Apple, which checks server interoperability. Note however that it does contain various apple specific tests. The folks at CalConnect are working together with Apple to make it more generally usable and to split out the Apple-specific tests. Given your server is read-only, don't expect everything to work, but you can hunt for fixing specific tests.
Upon executing an HTTP Get request, I receive the following error:
2015/08/30 16:42:09 Get https://en.wikipedia.org/wiki/List_of_S%26P_500_companies:
stopped after 10 redirects
In the following code:
package main

import (
	"log"
	"net/http"
)

func main() {
	response, err := http.Get("https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")
	if err != nil {
		log.Fatal(err)
	}
	// Use and close the response so the snippet compiles and doesn't leak.
	defer response.Body.Close()
}
I know that according to the documentation,
// Get issues a GET to the specified URL. If the response is one of
// the following redirect codes, Get follows the redirect, up to a
// maximum of 10 redirects:
//
// 301 (Moved Permanently)
// 302 (Found)
// 303 (See Other)
// 307 (Temporary Redirect)
//
// An error is returned if there were too many redirects or if there
// was an HTTP protocol error. A non-2xx response doesn't cause an
// error.
I was hoping that somebody knows what the solution would be in this case. It seems rather odd that this simple URL results in more than ten redirects. It makes me think that there may be more going on behind the scenes.
Thank you.
As others have pointed out, you should first give thought to why you are encountering so many HTTP redirects. Go's default policy of stopping at 10 redirects is reasonable. More than 10 redirects could mean you are in a redirect loop. That could be caused outside your code. It could be induced by something about your network configuration, proxy servers between you and the website, etc.
That said, if you really do need to change the default policy, you do not need to resort to editing the net/http source as someone suggested.
To change the default handling of redirects you will need to create a Client and set CheckRedirect.
For your reference:
http://golang.org/pkg/net/http/#Client
// If CheckRedirect is nil, the Client uses its default policy,
// which is to stop after 10 consecutive requests.
CheckRedirect func(req *Request, via []*Request) error
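For example, a client that raises the limit to 20 redirects (the number is arbitrary) could look like this sketch:

package main

import (
	"errors"
	"log"
	"net/http"
)

func main() {
	client := &http.Client{
		// CheckRedirect runs before each redirect is followed; returning an
		// error stops the chain. Here we allow up to 20 hops instead of 10.
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			if len(via) >= 20 {
				return errors.New("stopped after 20 redirects")
			}
			return nil
		},
	}
	resp, err := client.Get("https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}

Note this only raises the ceiling; if the URL really is in a redirect loop, you will just fail later.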
I had this issue with Wikipedia URLs containing %26, because Wikipedia redirects them to a version of the URL with &, which Go then encodes back to %26, which Wikipedia redirects to &, and so on...
Oddly, removing gcc-go (v1.4) from my Arch box and replacing it with go (v1.5) has fixed the problem.
I'm guessing this can be put down to the changes in net/http between v1.4 and v1.5 then.
I'm trying to get frequency data from an audio element whose src is a URL.
var aud = document.getElementById("audio-player");
var canvas, ctx, source, context, analyser, fbc_array;

function initMp3Player() {
  try {
    context = new (window.AudioContext || window.webkitAudioContext)();
  } catch (e) {
    throw new Error('The Web Audio API is unavailable');
  }
  analyser = context.createAnalyser(); // AnalyserNode
  analyser.smoothingTimeConstant = 0.6;
  analyser.fftSize = 512;
  canvas = document.getElementById('canvas_up');
  ctx = canvas.getContext('2d');
  // crossOrigin must be set on the media element itself (not on the source
  // node, which has no such property), and before the media starts loading,
  // so the file is requested in CORS mode.
  aud.crossOrigin = 'anonymous';
  source = context.createMediaElementSource(aud);
  source.connect(analyser);
  analyser.connect(context.destination);
  frameLooper();
}

function frameLooper() {
  window.requestAnimationFrame(frameLooper);
  fbc_array = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(fbc_array);
  console.log(fbc_array);
  var gradient = ctx.createLinearGradient(0, 0, 0, 300);
  gradient.addColorStop(1, '#000000');
  gradient.addColorStop(0.65, '#000000');
  gradient.addColorStop(0.55, '#FF0000');
  gradient.addColorStop(0.25, '#FFCC00');
  gradient.addColorStop(0, '#ffffff');
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = gradient; // Color of the bars
  for (var i = 0; i < fbc_array.length; i++) {
    var value = -(fbc_array[i] / 4);
    ctx.fillRect(i * 5, canvas.height, 4, value * 2);
  }
}

window.addEventListener("load", initMp3Player, false);
and HTML:
<audio id="audio-player"><source src="" type="audio/mpeg"></audio>
But I receive this error:
MediaElementAudioSource outputs zeroes due to CORS access restrictions for ...
I have searched a lot but haven't found a good, detailed answer. My English is not very good, so it would be super if answers included a demo... thanks!
I just ran into this problem and was going mad over the message "MediaElementAudioSource outputs zeroes due to CORS access restrictions". But it's just a message; I can still hear the audio.
I googled a lot about this and think this link will be helpful: http://www.codingforums.com/javascript-programming/342454-audio-api-js.html
The createMediaElementSource method should create an object that uses the MediaElementAudioSourceNode interface. Such objects are subject to Cross-Origin Resource Sharing (CORS) restrictions based on the latest draft of the Web Audio API spec. (Note that this restriction doesn't appear to be in the outdated W3C version of the spec.) According to the spec, silence should be played when CORS restrictions block access to a resource, which would explain the "outputs zeroes" message; presumably, zero is equivalent to no sound.
To lift the restriction, the owner of the page at http://morebassradio.no-ip.org:8214/;stream/1 would need to configure their server to output an Access-Control-Allow-Origin header with either a list of domains (including yours) or the * value to lift it for all domains. Given that this stream appears to already be unrestricted, public-facing content, maybe you can convince the owners to output that header. You can test whether the header is being sent by pressing Ctrl+Shift+Q in Firefox to open the Network panel, loading the stream through the address bar, and then inspecting the headers associated with that HTTP request in the Network panel.
Note that they can't use a meta element here since the audio stream is, obviously, not an HTML document; that technique only works for HTML and XHTML documents.
(While you're messing with Firefox panels, you may want to make sure Security errors and warnings are enabled (by clicking the Security button or its arrow) in the Console panel (Ctrl+Shift+K). I'm not sure if there's a corresponding CORS message in Firefox like in Chrome, but there might be. I wasted a bunch of time wondering why a page wasn't working one day while troubleshooting a similar technology, Content Security Policy (CSP), only to find that I had the relevant Firefox messages hidden.)
You shouldn't need to mess with the crossorigin property/attribute unless you set crossorigin = "use-credentials" (JavaScript) or crossorigin="use-credentials" (HTML) somewhere, but you probably didn't do that because that part of the HTML spec isn't finalized yet, and it would almost certainly cause your content to "break" after doing so since credentials would be required at that point.
I'm not familiar with the Web Audio API, so I wasn't able to figure out how to output a MediaElementAudioSourceNode and trigger an error message for my own troubleshooting. If I use createMediaElementSource with an HTMLMediaElement (HTMLAudioElement), the result doesn't seem to be a MediaElementAudioSourceNode, based on testing with the instanceof operator, even though the spec says it should be, if I'm reading it right.
Then, in my situation, I get these HTTP response headers:
HTTP/1.1 206 Partial Content
Date: Thu, 02 Jun 2016 06:50:43 GMT
Content-Type: audio/mpeg
Accept-Ranges: bytes
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: X-Log, X-Reqid
Access-Control-Max-Age: 2592000
Content-Disposition: inline; filename="653ab5685893b4bf.mp3"
Content-Transfer-Encoding: binary
Last-Modified: Mon, 16 May 2016 02:00:05 GMT
Server: nginx
Cache-Control: public, max-age=31536000
ETag: "FpGQqtcf_s2Ce8W_4Mv6ZqSVkVTK"
X-Log: mc.g;IO:2/304
X-Reqid: 71cAAFQgUBiJMVQU
X-Qiniu-Zone: 0
Content-Range: bytes 0-1219327/1219328
Content-Length: 1219328
Age: 1
X-Via: 1.1 xinxiazai211:88 (Cdn Cache Server V2.0), 1.1 hn13:8 (Cdn Cache Server V2.0)
Connection: keep-alive
Note the "Access-Control-Allow-Origin: *"; I think this is exactly the right thing, but I still get the message. Hope it helps you.
This is correct. You can't access media from a different domain in Web Audio without CORS enabled on the media server (and without making the appropriate CORS request). This is to prevent cross-domain information attacks.
I ran into this problem when developing my application by opening the index.html file directly in my browser. A server was required in order to use the audio files I needed.
I installed the Live Server extension in Visual Studio Code, which is one of many ways to solve this.
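Any minimal local server works. As a sketch (the port and directory are placeholder choices), here is one in Go that also sends the Access-Control-Allow-Origin header discussed above, in case the audio ends up served from a different origin than the page:

package main

import "net/http"

func main() {
	files := http.FileServer(http.Dir(".")) // serves index.html, audio files, etc.
	http.Handle("/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Allow any origin to fetch these files, so a page on another origin
		// can use them with <audio crossorigin="anonymous">.
		w.Header().Set("Access-Control-Allow-Origin", "*")
		files.ServeHTTP(w, r)
	}))
	http.ListenAndServe(":8000", nil) // then browse to http://localhost:8000
}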