When I try to use:
temp = new createjs.Bitmap(obj.path)
I get error:
Uncaught An error has occurred. This is most likely due to security restrictions on reading canvas pixel data with local or cross-domain images.
This is somewhat related to this question, except that there the user wasn't running a web server. In my case I am using a web server, but I need the images hosted on an AWS S3 bucket over HTTPS.
Is there a setting that I can use to allow CORS with EaselJs?
CORS Settings on S3 Bucket:
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Edit: related question
Bitmap does not handle CORS itself, since the crossOrigin property must be set on the image directly, before its src is assigned.
The best way to do this is to create the image yourself, apply crossOrigin, and then pass it to Bitmap.
var image = document.createElement("img");
image.crossOrigin = "Anonymous"; // must be set before src
image.src = obj.path;
temp = new createjs.Bitmap(image);
There is currently no API on EaselJS to pass through crossOrigin, so this is the only way.
If you preload your EaselJS content with PreloadJS, you can pass the crossOrigin property on the queue, or any load item.
var queue = new createjs.LoadQueue(false, null, true); // 3rd param enables crossOrigin for the whole queue
queue.loadFile("path/to/bmp.png"); // will use the queue's CORS setting
queue.loadFile({src:"path/to/bmp.png", crossOrigin:true, id:"image"}); // for one file only (you can leave out the param on LoadQueue)
// later
var bmp = new createjs.Bitmap(queue.getResult("image"));
Note that the reason you get CORS errors even if the image displays is that EaselJS reads the image's pixel data to do hit testing. You can also get around this by putting hitAreas on your images, which are used instead of the image pixels when doing hit testing:
var temp = new createjs.Bitmap(obj.path);
temp.image.onload = function() {
    var s = temp.hitArea = new createjs.Shape();
    s.graphics.beginFill("red").drawRect(0, 0, temp.image.width, temp.image.height);
}
Yes: enable CORS on your AWS S3 bucket under Permissions.
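Note that newer versions of the S3 console take the CORS rules as JSON rather than XML. A sketch equivalent to the XML config shown in the question (same origins, methods, and headers):

[
    {
        "AllowedOrigins": ["*"],
        "AllowedMethods": ["GET"],
        "AllowedHeaders": ["Authorization"],
        "MaxAgeSeconds": 3000
    }
]

Remember that the client must still request the image with crossOrigin set, as shown in the answers above.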
In my app I have already integrated Google login, and I can successfully access my Google Drive folder and its video files (mp4). better_player can play a public video through a modified URL like https://drive.google.com/uc?id=my_video_file_id.
I want to develop an app like Nplayer or Kudoplayer.
Can anyone outline how an app like Nplayer or Kudoplayer works, and how to play a private Drive video using better_player? Thanks.
Finally I got the solution after researching for a long time :-)
Summary answer for this question:
Don't forget to add headers in BetterPlayerDataSource.
Don't use the format "https://drive.google.com/uc?export=view&id=${fileId}"; use "https://www.googleapis.com/drive/v3/files/${fileId}?alt=media" instead.
For anyone who faces the same issue, follow these steps:
Enable the Drive API.
Integrate Google login in your app with an appropriate scope; for me, driveReadonlyScope was enough.
Retrieve the fileId.
The step I was stuck on was the data source itself: add the auth headers and use the googleapis URL, like this:
BetterPlayerDataSource betterPlayerDataSource = BetterPlayerDataSource(
BetterPlayerDataSourceType.network,
"https://www.googleapis.com/drive/v3/files/$fileId?alt=media",
videoExtension: ".mp4",
headers: getDataFromCache(CacheManagerKey.authHeader),
);
authHeader can be obtained when you call the signWithGoogle function;
I save it and retrieve it later. Inside signWithGoogle, you can get authHeader like this:
Map<String, String> authHeader = await googleSignIn.currentUser!.authHeaders;
Bonus: queries for you.
Get Shared With Me data => q: "sharedWithMe = true and (mimeType = 'video/mp4' or mimeType = 'application/vnd.google-apps.folder')"
Get Shared Drive data => q: "'shared_drive_id' in parents and trashed=false and (mimeType = 'video/mp4' or mimeType = 'application/vnd.google-apps.folder')"
Get My Drive data => q: "'root' in parents and trashed=false and (mimeType = 'video/mp4' or mimeType = 'application/vnd.google-apps.folder')"
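If it helps, here is a rough Dart sketch (not from the original answer) of running one of these queries with the googleapis package; driveApi is assumed to be a DriveApi instance from package:googleapis/drive/v3.dart, built on the signed-in account's authenticated client:

// list videos and folders under My Drive; each file.id is a fileId
// usable in the googleapis media URL shown above
final fileList = await driveApi.files.list(
  q: "'root' in parents and trashed=false and "
      "(mimeType = 'video/mp4' or mimeType = 'application/vnd.google-apps.folder')",
);
for (final file in fileList.files ?? []) {
  print('${file.name} => ${file.id}');
}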
That's tricky.
I would probably try to attach appropriate headers so Google Drive would allow me to reach the file through the BetterPlayerDataSource class.
But being curious and checking how Google would want it to be done, I got into a spiral of Google docs and found this:
https://cloud.google.com/blog/products/identity-security/enhancing-security-controls-for-google-drive-third-party-apps
Keep in mind that downloading a file and streaming it also differ, so if you don't want your player to lag, you should stream.
I've got a site that accepts file uploads, which are sent as multipart/form-data within a POST request. The upload page shows the filename afterwards, and to verify that it is secured against XSS, I want to upload a file whose filename contains HTML tags.
This is actually harder than I expected. I can't create a file containing < in its name on my filesystem (Windows). I also don't know a way to change the filename of the file input element inside the DOM before the upload (which is what I would do with normal/hidden inputs). So I thought about editing the POST body before it's uploaded, but I don't know how. Popular extensions (I recall Tamper Data, Tamper Dev) only let me change headers; I guess this is due to the plugin system of Chrome, which is the browser I use.
So, what's the simplest way of manipulating the POST request's body? I could craft the entire request using cURL, but I also need state, lots of additional parameters, session data, etc., which gets quite complex. A simple way within the browser would be nice.
So, while this is not a perfect solution, it is at least a way to recreate and manipulate the form submission using FormData and fetch. It is not as generic as I'd like it to be, but it works in this case. Just run this code in the devtools to submit the form with the altered filename:
let formElement = document.querySelector('#idForm'); // get the form element
let oldForm = new FormData(formElement);
let newForm = new FormData();
// copy the FormData entry by entry
for (let pair of oldForm.entries()) {
    console.log(pair[0] + ': ' + pair[1]);
    if (typeof pair[1] === 'object' && pair[1].name) {
        // alter the filename if it's a file
        newForm.append(pair[0], pair[1], 'yourNewFilename.txt');
    } else {
        newForm.append(pair[0], pair[1]);
    }
}
// log the new FormData
for (let pair of newForm.entries()) {
    console.log(pair[0] + ':');
    console.log(pair[1]);
}
// submit it
fetch(formElement.action, {
    method: formElement.method,
    body: newForm
});
I'd still appreciate other approaches.
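One other approach, as a sketch (the field name, filename payload, and URL are placeholders for your form's actual values): skip FormData entirely and hand-build the multipart body as a string, which lets you put any characters you like into the filename:

// build a raw multipart/form-data body with an arbitrary filename
const boundary = '----tamperBoundary' + Date.now();
const body =
    '--' + boundary + '\r\n' +
    'Content-Disposition: form-data; name="file"; filename="<img src=x onerror=alert(1)>.txt"\r\n' +
    'Content-Type: text/plain\r\n\r\n' +
    'dummy file contents\r\n' +
    '--' + boundary + '--\r\n';
fetch('/upload', {
    method: 'POST',
    headers: { 'Content-Type': 'multipart/form-data; boundary=' + boundary },
    body: body // cookies/session are included automatically for same-origin requests
});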
I'm sending an email using PHPMailer. I have a web service that generates a PDF; the PDF is not uploaded or saved anywhere.
The PDF URL looks like this:
http://mywebsite/webservices/report/sales_invoice.php?company=development&sale_id=2
I need to attach the PDF from this dynamic URL to my email.
My email sending service URL looks like this:
http://mywebsite/webservices/mailservices/sales_email.php
Below is the code I am using to attach the PDF.
$pdf_url = "../report/sales_invoice.php?company=development&sale_id=2";
$mail->AddAttachment($pdf_url);
Sending the message works, but the PDF doesn't get attached. It gives the message below.
Could not access file: ../report/sales_invoice.php?company=development&sale_id=2
I need some help.
To have the answer right here:
As PHPMailer does not auto-fetch remote content, you need to fetch it yourself. So you go:
// we can use file_get_contents to fetch binary data from a remote location
$url = 'http://mywebsite/webservices/report/sales_invoice.php?company=development&sale_id=2';
$binary_content = file_get_contents($url);
// You should perform a check to see if the content
// was actually fetched. Use the === (strict) operator to
// check $binary_content for false.
if ($binary_content === false) {
throw new Exception("Could not fetch remote content from: '$url'");
}
// $mail must have been created
$mail->AddStringAttachment($binary_content, "sales_invoice.pdf", 'base64', 'application/pdf');
// continue building your mail object...
Some other things to watch out for:
Depending on the server's response time, your script might run into timeouts. Also, the fetched data might be pretty large and could cause PHP to exceed its memory limit.
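For the timeout part, one option (a sketch; 30 seconds is an arbitrary value) is to pass a stream context to file_get_contents:

// give the fetch an explicit timeout so a slow report service
// cannot hang the mail script indefinitely
$context = stream_context_create([
    'http' => ['timeout' => 30], // seconds; tune to your service
]);
$binary_content = file_get_contents($url, false, $context);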
I need to upload to multiple s3 buckets with filepicker.io. I found a tweet that indicated that there was a hacky, but possible, way to do this. Support hasn't gotten back to me yet, so I'm hoping that someone here already knows the answer!
Have you tried generating a second application/API key? It looks like they lock your S3/AWS credentials to an application/API key rather than directly to the account.
Support just got back to me. There's no way to do this besides creating multiple applications, which is okay if you are just switching between prod/staging/dev, but not a good solution if you have to upload to arbitrary buckets.
My solution is to execute a PUT request with the x-amz-copy-source header after the file has been uploaded, which copies it to the correct bucket.
This is pretty hacky, as it requires two extra requests per file: one filepicker.stat call and one more call to S3 (or your server).
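For reference, the copy is a plain PUT against the destination object with the source named in a header; everything below (bucket names, keys, the auth value) is a placeholder sketch of the S3 REST call, not a literal request:

PUT /destination-key HTTP/1.1
Host: destination-bucket.s3.amazonaws.com
x-amz-copy-source: /source-bucket/source-key
Authorization: (your AWS signature)
Content-Length: 0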
@Ben, I am developing code with the same issue of files needing to go into many buckets. I'm working in ASP.NET.
What I have done is have one Filepicker 'application' with its own S3 bucket.
I already had a callback to the server in the JavaScript onSuccess() function (which is passed as a parameter to filepicker.store()). This callback needed to be there to do some book-keeping anyway.
So I have just added an extra bit to the server-side callback code, which uses the AWS SDK to copy the object from the bucket Filepicker uploads it to, to its final destination bucket.
This is my C# code for moving, or rather copying, an object between buckets:
public bool MoveObject(string bucket1, string key1, string bucket2, string key2 = null)
{
    bool success = false;
    if (key2 == null) key2 = key1;
    Logger logger = new Logger(); // my logging system
    try
    {
        RegionEndpoint region = RegionEndpoint.EUWest1; // use your region here
        using (AmazonS3Client s3Client = new AmazonS3Client(region))
        {
            // TODO: CheckForBucketFunction
            CopyObjectRequest request = new CopyObjectRequest();
            request.SourceBucket = bucket1;
            request.SourceKey = key1;
            request.DestinationBucket = bucket2;
            request.DestinationKey = key2;
            S3Response response = s3Client.CopyObject(request);
            logger.Info2Log("response xml = \n{0}\n", response.ResponseXml);
            response.Dispose();
            success = true;
        }
    }
    catch (AmazonS3Exception ex)
    {
        logger.Info2Log("Error copying file between buckets: {0} - {1}",
            ex.ErrorCode, ex.Message);
        success = false;
    }
    return success;
}
There are AWS SDKs for other server languages, and the good news is that Amazon doesn't charge for copying objects between buckets in the same region.
Now I just have to decide how to delete the object from the Filepicker application bucket. I could do it on the server using more AWS SDK code (see the sketch below), but that is messy because it leaves links to the object in the Filepicker console. Or I could do it from the browser using Filepicker code.
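For what it's worth, a rough (untested) sketch of that server-side delete, reusing the SDK and region from MoveObject above; bucket1/key1 stand for the staging object's bucket and key:

using (AmazonS3Client s3Client = new AmazonS3Client(RegionEndpoint.EUWest1))
{
    DeleteObjectRequest deleteRequest = new DeleteObjectRequest();
    deleteRequest.BucketName = bucket1; // the bucket Filepicker uploads to
    deleteRequest.Key = key1;
    s3Client.DeleteObject(deleteRequest);
}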
What I'm Trying To Do
I'm trying to create a solution of any kind that will run nightly on a Windows server. It needs to authenticate to a website, check a web page on that site for new links indicating a new version of a zip file, use the new links (if present) to download the zip file, unzip the download into an existing folder on the server, use the unzipped contents (SQL scripts, etc.) to build an instance of a database, and log everything that happens to a text file.
Forms App: The Part That Sorta Works
I created a Windows Forms app that uses a couple of WebBrowser controls, a couple of threads, and a few timers to do all of that except the running nightly. It works great as a Form when I'm logged in and run it, but I need to get it (or something like it) to run on its own, like a service or scheduled task.
My Service Attempt
So, I created a Windows Service that ticks every hour and, if System.DateTime.Now.Hour >= 22, attempts to launch the Windows Forms app to do its thing. When the Service attempts to launch the Form, this error occurs:
ActiveX control '8856f961-340a-11d0-a96b-00c04fd705a2' cannot be instantiated because the current thread is not in a single-threaded apartment.
which I researched and tried to resolve by either placing the [STAThread] attribute on the Main method of the Service's Program class or using some code like this in a few places including the Form constructor:
webBrowseThread = new Thread(new ThreadStart(InitializeComponent));
webBrowseThread.SetApartmentState(ApartmentState.STA);
webBrowseThread.Start();
I couldn't get either approach to work. In the latter approach, the controls on the Form (which would get initialized inside InitializeComponent) don't get initialized, and I get null reference exceptions.
My Scheduled Task Attempt
So, I tried creating a nightly scheduled task using my own credentials to run the Form locally on my dev machine (just testing). It gets farther than the Service did, but gets hung up at the File Download Dialog.
Related note: to send the key sequences that get through the File Download and File Save As dialogs, my Form actually runs a couple of VBScript files that use WScript.Shell.SendKeys. OK, that's embarrassing to admit, but I tried a few different things, including SendMessage in the Win32 API and referencing IWshRuntimeLibrary to use SendKeys inside my C# code. When I was researching how to get through the dialogs, the Win32 API seemed to be the recommended way to go, but I couldn't figure it out. The VBScript files were the only thing I could get to work, but I'm worried now that this may be the reason a scheduled task won't work.
Regarding My Choice of WebBrowser Control
I have read about the System.Net.WebClient class as an alternative to the WebBrowser control, but at a glance, it doesn't look like it has what I need to get this done. For example, I needed (or I think I needed) the WebBrowser's DocumentCompleted and FileDownload events to handle the delays in pages loading and files downloading. Is there more to WebClient that I'm not seeing? Is there another class besides WebBrowser that is more service-friendly and would do the trick?
In Summary
Geez, this is long. Sorry! It would help to even have a high level recommendation for a better way to do what I'm trying to do, because nothing I've tried has worked.
Update 10/22/09
Well, I think I'm closer, but I'm stuck again. I should end up with a decent-sized zip file with several files in it, but the zip file resulting from my code is empty. Here's my code:
// build post request
string targetHref = "http://wwwcf.nlm.nih.gov/umlslicense/kss/login.cfm";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(targetHref);
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
// encoding to use
Encoding enc = Encoding.GetEncoding(1252);
// build post string containing authentication information and add to post request
string poststring = "returnUrl=" + fixCharacters(targetDownloadFileUrl);
poststring += getUsernameAndPasswordString();
poststring += "&login2.x=0&login2.y=0";
// convert to required byte array
byte[] postBytes = enc.GetBytes(poststring);
request.ContentLength = postBytes.Length;
// write post to request
Stream postStream = request.GetRequestStream();
postStream.Write(postBytes, 0, postBytes.Length);
postStream.Close();
// get response as stream
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream responseStream = response.GetResponseStream();
// writes stream to zip file
FileStream writeStream = new FileStream(fullZipFileName, FileMode.Create, FileAccess.Write);
ReadWriteStream(responseStream, writeStream);
response.Close();
responseStream.Close();
The code for ReadWriteStream looks like this.
private void ReadWriteStream(Stream readStream, Stream writeStream)
{
// taken verbatim from http://www.developerfusion.com/code/4669/save-a-stream-to-a-file/
int Length = 256;
Byte[] buffer = new Byte[Length];
int bytesRead = readStream.Read(buffer, 0, Length);
// write the required bytes
while (bytesRead > 0)
{
writeStream.Write(buffer, 0, bytesRead);
bytesRead = readStream.Read(buffer, 0, Length);
}
readStream.Close();
writeStream.Close();
}
The building of the post string is taken from my previous forms app that works. I compared the resulting values in poststring for both sets of code (my working forms app and this one) and they're identical.
I'm not even sure how to troubleshoot this further. Anyone see anything obvious as to why this isn't working?
Conclusion 10/23/09
I finally have this working. A couple of important hurdles I had to get over. I had some problems with the ReadWriteStream method code that I got online. I don't know why, but it wasn't working for me. A guy named JB in Claudio Lassala's Virtual Brown Bag meeting helped me to come up with this code which worked much better for my purposes:
private void WriteResponseStreamToFile(Stream responseStreamToRead, string zipFileFullName)
{
// responseStreamToRead will contain a zip file, write it to a file in
// the target location at zipFileFullName
FileStream fileStreamToWrite = new FileStream(zipFileFullName, FileMode.Create);
int readByte = responseStreamToRead.ReadByte();
while (readByte != -1)
{
fileStreamToWrite.WriteByte((byte)readByte);
readByte = responseStreamToRead.ReadByte();
}
fileStreamToWrite.Flush();
fileStreamToWrite.Close();
}
As Will suggested below, I did have trouble with the authentication. The following code is what worked to get around that issue. A few comments inserted addressing key issues I ran into.
string targetHref = "http://wwwcf.nlm.nih.gov/umlslicense/kss/login.cfm";
HttpWebRequest firstRequest = (HttpWebRequest)WebRequest.Create(targetHref);
firstRequest.AllowAutoRedirect = false; // this is critical, without this, NLM redirects and the whole thing breaks
// firstRequest.Proxy = new WebProxy("127.0.0.1", 8888); // not needed for production, but this helped in order to debug the http traffic using Fiddler
firstRequest.Method = "POST";
firstRequest.ContentType = "application/x-www-form-urlencoded";
// build post string containing authentication information and add to post request
StringBuilder poststring = new StringBuilder("returnUrl=" + fixCharacters(targetDownloadFileUrl));
poststring.Append(getUsernameAndPasswordString());
poststring.Append("&login2.x=0&login2.y=0");
// convert to required byte array
byte[] postBytes = Encoding.UTF8.GetBytes(poststring.ToString());
firstRequest.ContentLength = postBytes.Length;
// write post to request
Stream postStream = firstRequest.GetRequestStream();
postStream.Write(postBytes, 0, postBytes.Length); // Fiddler shows that post and response happen on this line
postStream.Close();
// get response as stream
HttpWebResponse firstResponse = (HttpWebResponse)firstRequest.GetResponse();
// create new request for new location and cookies
HttpWebRequest secondRequest = (HttpWebRequest)WebRequest.Create(firstResponse.GetResponseHeader("location"));
secondRequest.AllowAutoRedirect = false;
secondRequest.Headers.Add(HttpRequestHeader.Cookie, firstResponse.GetResponseHeader("Set-Cookie"));
// get response to second request
HttpWebResponse secondResponse = (HttpWebResponse)secondRequest.GetResponse();
// write stream to zip file
Stream responseStreamToRead = secondResponse.GetResponseStream();
WriteResponseStreamToFile(responseStreamToRead, fullZipFileName);
responseStreamToRead.Close();
sl.logScriptActivity("Downloading update.");
firstResponse.Close();
I want to underscore that setting AllowAutoRedirect to false on the first HttpWebRequest instance was critical to the whole thing working. Fiddler showed two additional requests that occurred when this was not set, and it broke the rest of the script.
You're trying to use UI controls to do something in a Windows service. This will never work.
What you need to do is just use the WebRequest and WebResponse classes to download the contents of the webpage.
var request = WebRequest.Create("http://www.google.com");
var response = request.GetResponse();
var stream = response.GetResponseStream();
You can dump the contents of the stream, parse the text looking for updates, and then construct a new request for the URL of the file you want to download. That response stream will then have the file, which you can write to the filesystem, and so on.
Before you wonder, GetResponse will block until the response returns, and the stream will block as data is being received, so you don't need to worry about events firing when everything has been downloaded.
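To make that concrete, here is a hedged sketch of the two-step flow (the URLs, the target path, and the FindZipLink helper are placeholders for your own page, link-parsing logic, and download location; it needs System.Net and System.IO):

var pageRequest = WebRequest.Create("http://example.com/downloads");
using (var pageResponse = pageRequest.GetResponse())
using (var reader = new StreamReader(pageResponse.GetResponseStream()))
{
    string html = reader.ReadToEnd();   // page contents, ready to parse for new links
    string zipUrl = FindZipLink(html);  // hypothetical helper: your link-parsing logic
    var fileRequest = WebRequest.Create(zipUrl);
    using (var fileResponse = fileRequest.GetResponse())
    using (var fileStream = File.Create(@"C:\updates\latest.zip"))
    {
        fileResponse.GetResponseStream().CopyTo(fileStream); // Stream.CopyTo requires .NET 4+
    }
}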
You definitely need to re-think your approach (as you've already begun to do) to eliminate the Forms-based application approach. The service you're describing needs to operate with no UI at all.
I'm not familiar with the details of System.Net.WebClient, but since it "provides common methods for sending data to and receiving data from a resource identified by a URI," it will probably be your answer.
At first glance, WebClient.DownloadFile(...) or WebClient.DownloadFileAsync(...) will do what you need.
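For example, a minimal sketch (URL and path are placeholders), assuming you have already scraped the zip's address from the page:

using (var client = new System.Net.WebClient())
{
    client.DownloadFile("http://example.com/files/update.zip", @"C:\updates\latest.zip");
}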
The only thing I can add is that once you have scraped your screen and have the fully qualified name of the file you want to download, you could pass it along to a command-line HTTP client (wget for Windows, for example) to fetch the file. You can also script a command-line FTP client if desired. It's been a long time since I tried something like this on Windows, but I think you're almost there. Once you have fetched the correct file, building a batch file to do everything else should be pretty easy. If you are more comfortable with Unix, google "unix services for windows"; just keep an eye on the services they start running (DHCP, etc.). There are some nice utilities that will let you treat the DOS prompt as a Unix-like shell (ls -l, grep, etc.). Finally, you could try another language like Perl or Python, but I don't think that's the kind of advice you were looking for. :)