Magento 2 - Custom REST API endpoint for PDF download - net::ERR_HTTP2_PROTOCOL_ERROR 200

I have created a custom REST API endpoint in Magento for downloading a PDF file.
When testing it locally, I send a request to that endpoint from a separate React JS project and the file is downloaded successfully. The HTTP/1.1 protocol is used (checked in the Network tab of Chrome's dev tools).
After deployment to the staging servers, when I make the same request between the staging servers, from the React JS project to the Magento 2 project, it takes 30-40 seconds without any response, and then an error is shown in the console. There is nothing in the error logs. The HTTP/2 protocol is used (not sure whether that could be the reason for the issue).
Failed to fetch - net::ERR_HTTP2_PROTOCOL_ERROR 200
Here is the piece of PHP code for downloading the PDF file:
...
$filename = $outputFileName . time() . '.' . $extension;
$directoryTmpWrite = $this->filesystem->getDirectoryWrite(DirectoryList::TMP);
$directoryTmpWrite->writeFile($filename, $fileContent);

return $this->fileFactory->create(
    $outputFileName . '.' . $extension,
    [
        'type'  => 'filename',
        'value' => $filename,
        'rm'    => true,
    ],
    DirectoryList::TMP, // base dir
    'application/octet-stream',
    ''
);

How I resolved my issue:
I had to turn on output buffering in PHP in Magento, to ensure that no output is sent from the script prematurely; instead, the output is stored in a buffer and sent all together when script execution finishes.
So ob_start(), placed right before the return statement, fixed my issue.
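For reference, a minimal sketch of the placement (abbreviated; the surrounding code is the same as above):

$directoryTmpWrite->writeFile($filename, $fileContent);

// Buffer anything emitted from here on (stray whitespace, notices, warnings)
// so nothing is flushed to the client before the download response is complete.
ob_start();

return $this->fileFactory->create( /* ...same arguments as above... */ );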

Related

Sudden Axios Error not properly sending parameters

April 26, 2022: The system we are working on suddenly stopped working. I found out that the request data sent from the frontend to the backend suddenly became malformed.
The regular request:
array ( 'email' => 'xxxxx+1@gmail.com', 'password' => 12341234, )
When running Axios:
When I tried to use Ajax, the request was the normal one. However, we are using axios for the majority of the project. Is anyone else having the same error?
We are using the script: https://unpkg.com/axios/dist/axios.min.js

Binance::API: can't connect to the exchange

I'm an algo-trader and a Perl fan.
I want to create a client that connects to the Binance Futures Testnet, and I decided to use the Binance::API module developed for Perl.
Once the Binance::API module was installed (no errors or warnings occurred there), I dived into the first lines of my script, as follows:
#!/bin/perl
use Binance::API;

# Binance Testnet API credentials
my $api = Binance::API->new(
    apiKey    => 'my api',
    secretKey => 'my secret key',
);

$api->account();
$api->exchange_info();
The API key and secret key are taken from my Binance Futures Testnet account (freely available to all users) and have been used successfully via TradingView and its Pine Script tool.
Unfortunately, I got the following error:
[Binance::API::Request::_exec] Unsuccessful request.
Status => 401,
Content => {"code":-2015,"msg":"Invalid API-key, IP, or permissions for action."} at C:/Strawberry/perl/site/lib/Binance/API/Request.pm line 107.
[Binance::API::Request::_exec] Unsuccessful request.
Status => 404,
Content => <html><body><h2>404 Not found</h2></body></html> at C:/Strawberry/perl/site/lib/Binance/API/Request.pm line 107.
Any idea on what went wrong with this? I don't want to use Python or C++ as I love Perl and its versatility.
If you look at the Binance::API source code, you can see that this module was developed for the Spot market, not Futures:
https://github.com/taskula/binance-perl-api/blob/master/lib/Binance/Constants.pm
BEGIN {
    %constants = (
        BASE_URL => $ENV{BINANCE_API_BASE_URL} || 'https://api.binance.com', # this endpoint is for Spot
        DEBUG    => $ENV{BINANCE_API_DEBUG} || 0,
    );
}
For the Spot Testnet, you can get an API key from here:
https://binance-docs.github.io/apidocs/spot/en/#enabling-accounts
I think you may have confused Spot with Futures. There are four different base URLs for the different markets:
Spot Production site: https://api.binance.com
Spot Testnet site: https://testnet.binance.vision
Futures Production site: https://fapi.binance.com
Futures Testnet site: https://testnet.binancefuture.com
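Given the Constants.pm excerpt above, a minimal sketch (an untested assumption on my part) for pointing the module at the Spot Testnet is to override BINANCE_API_BASE_URL before the module loads. Note that the Futures API uses different URL paths entirely, so merely swapping the base URL will not make this Spot-oriented module talk to Futures:

# Must run before Binance::API is compiled, since Constants.pm
# reads %ENV inside a BEGIN block.
BEGIN { $ENV{BINANCE_API_BASE_URL} = 'https://testnet.binance.vision'; }

use Binance::API;

my $api = Binance::API->new(
    apiKey    => 'my spot testnet api key',    # keys generated on the Spot
    secretKey => 'my spot testnet secret key', # Testnet, not the Futures Testnet
);

$api->exchange_info();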

Add headers in response in user plugin

When retrieving a file from Artifactory, I would like to return its properties as HTTP headers.
Attempting to add headers to the response of a file download in the altResponse code block of the download plugin does not appear to work as I expected.
With the following user plugin loaded, I can see the code being executed in the logs; however, the header is not included in the response (using curl to download the file):
import org.artifactory.repo.RepoPath
import org.artifactory.request.Request

download {
    altResponse { Request request, RepoPath responseRepoPath ->
        headers = ["ExtraHeader": "SpecialHeader"]
        log.warn "adding header: $headers"
    }
}
logs:
2018-05-07 17:28:04,969 [http-nio-8088-exec-4] [WARN ] (properties :7) - adding header: [ExtraHeader:SpecialHeader]
documentation:
https://www.jfrog.com/confluence/display/RTF/User+Plugins#UserPlugins-Download
I am running the Artifactory plugin development environment locally (it currently loads version 5.11.0):
https://github.com/JFrogDev/artifactory-user-plugins-devenv
Am I misunderstanding how headers is supposed to be used?

Deploying a CGI-to-PSGI converted application in Apache

The original CGI page:
#!C:/perl/bin/perl.exe
use CGI;

my $q = CGI->new;
print $q->header('text/plain'),
    "Hello ", $q->param('name');
The converted PSGI page:
#!C:/perl/bin/perl.exe
use CGI::PSGI;

my $app = sub {
    my $env = shift;
    my $q = CGI::PSGI->new($env);
    return [
        $q->psgi_header('text/plain'),
        [ "Hello ", $q->param('name') ],
    ];
};
I run this cgi.pl on the Apache server as
http://localhost/cgi-bin/cgi.pl
but I am not able to run the converted psgi.pl on the Apache server.
It displays:
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator at admin@example.com to inform them of the time this error occurred, and the actions you performed just before this error. More information about this error may be available in the server error log.
Please help.
CGI and PSGI are two different specifications of how a web server and an external program communicate.
Under CGI, the web server expects to receive text output from the program, consisting of the HTTP Response headers, a blank line, and the HTML rendered by the program.
The mod_cgi module implements this logic for the Apache server, and if the output from the program does not comply, Apache reports the 500 error.
Under PSGI, the web server expects the program to return a three-element list consisting of the HTTP response code, the HTTP response headers, and the HTML rendered by the program.
So you can see that a program conforming to the PSGI spec would confuse mod_cgi.
So you need to install an Apache module that implements PSGI, or employ a Perl module (the CGI::PSGI docs suggest CGI::Emulate::PSGI) that will accept your PSGI list and convert it to CGI output for you.
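For example, a minimal sketch of the second route (assuming CGI::Emulate::PSGI is installed) that lets Apache's mod_cgi run the PSGI app from the question:

#!C:/perl/bin/perl.exe
use CGI::PSGI;
use CGI::Emulate::PSGI;

# The PSGI application from the question, unchanged.
my $app = sub {
    my $env = shift;
    my $q   = CGI::PSGI->new($env);
    return [
        $q->psgi_header('text/plain'),
        [ "Hello ", $q->param('name') ],
    ];
};

# Translate the PSGI response back into the CGI output mod_cgi expects.
CGI::Emulate::PSGI->handler($app);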

How to make browser stop caching GWT nocache.js

I'm developing a web app using GWT and am seeing a crazy problem with caching of the app.nocache.js file in the browser even though the web server sent a new copy of the file!
I am using Eclipse to compile the app, which works in dev mode. To test production mode, I have a virtual machine (Oracle VirtualBox) with a Ubuntu guest OS running on my host machine (Windows 7). I'm running lighttpd web server in the VM. The VM is sharing my project's war directory, and the web server is serving this dir.
I'm using Chrome as the browser, but the same thing happens in Firefox.
Here's the scenario:
1. The web page for the app is blank. According to Chrome's "Inspect Element" tool, this is because it is trying to fetch 6E89D5C912DD8F3F806083C8AA626B83.cache.html, which doesn't exist (404 Not Found).
2. I check the war directory, and sure enough, that file doesn't exist.
3. The app.nocache.js in the browser WAS RELOADED from the web server (200 OK), because the file on the server was newer than the browser cache. I verified that the file size and timestamp of the new file returned by the server were correct. (This is info Chrome reports about the server's HTTP response.)
4. However, if I open the app.nocache.js in the browser, the JavaScript is still referring to 6E89D5C912DD8F3F806083C8AA626B83.cache.html! That is, even though the web server sent a new app.nocache.js, the browser seems to have ignored it and kept using its cached copy!
5. Go to Google -> GWT Compile in Eclipse and recompile the whole thing.
6. Verify in the war directory that app.nocache.js was overwritten and has a new timestamp.
7. Reload the page in Chrome and verify once again that the server sent a 200 OK response for app.nocache.js.
8. The browser once again tries to load 6E89D5C912DD8F3F806083C8AA626B83.cache.html and fails. The browser is still using the old cached copy of app.nocache.js.
9. I made absolutely certain in the war directory that nothing refers to 6E89D5C912DD8F3F806083C8AA626B83.cache.html (via find and grep).
What is going wrong? Why is the browser caching this nocache.js file even when the server is sending it a new copy?
Here is a copy of the HTTP request/response headers when clicking reload in the browser. In this trace, the server content hasn't been recompiled since the last GET (but note that the cached version of nocache.js is still wrong!):
Request URL:http://192.168.2.4/xbts_ui/xbts_ui.nocache.js
Request Method:GET
Status Code:304 Not Modified
Request Headers:
Accept:*/*
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Host:192.168.2.4
If-Modified-Since:Thu, 25 Oct 2012 17:55:26 GMT
If-None-Match:"2881105249"
Referer:http://192.168.2.4/XBTS_ui.html
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4
Response Headers:
Accept-Ranges:bytes
Content-Type:text/javascript
Date:Thu, 25 Oct 2012 20:27:55 GMT
ETag:"2881105249"
Last-Modified:Thu, 25 Oct 2012 17:55:26 GMT
Server:lighttpd/1.4.31
The best way to avoid browser caching is to set the expiration time to now and add the max-age=0 and must-revalidate controls.
This is the configuration I use with apache-httpd:
ExpiresActive on

<LocationMatch "nocache">
    ExpiresDefault "now"
    Header set Cache-Control "public, max-age=0, must-revalidate"
</LocationMatch>

<LocationMatch "\.cache\.">
    ExpiresDefault "now plus 1 year"
</LocationMatch>
Your configuration for lighttpd should be:
server.modules = (
    "mod_expire",
    "mod_setenv",
)
...
$HTTP["url"] =~ "\.nocache\." {
    setenv.add-response-header = ( "Cache-Control" => "public, max-age=0, must-revalidate" )
    expire.url = ( "" => "access plus 0 days" )
}
$HTTP["url"] =~ "\.cache\." {
    expire.url = ( "" => "access plus 1 years" )
}
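You can verify that the headers are actually being served with curl (host and path taken from the question's trace):

curl -I http://192.168.2.4/xbts_ui/xbts_ui.nocache.js

The response should now include Cache-Control: public, max-age=0, must-revalidate for the nocache file.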
We had a similar issue. We found out that the timestamp of the nocache.js was not updated by the GWT compile, so we had to touch the file on build. Then we also applied the fix from @Manolo Carrasco Moñino. I wrote a blog post about this issue: http://programtalk.com/java/gwt-nocachejs-cached-by-browser/
We are using GWT version 2.7, as the comment also points out.
There are two straightforward solutions (the second is a modified version of the first):
1) Rename your *.html file which has a reference to *.nocache.js, e.g. from MyProject.html to MyProject.jsp.
Now find the reference to your *.nocache.js script in the renamed file:
<script language="javascript" src="MyProject/MyProject.nocache.js"></script>
Add a dynamic variable as a parameter for the JS file; this makes sure the actual contents are returned from the server every time. The following is an example:
<script language="javascript" src="MyProject/MyProject.nocache.js?dummyParam=<%= "" + new java.util.Date().getTime() %>"></script>
Explanation: dummyParam serves no purpose by itself BUT gets us our intended result, i.e. it returns a 200 code instead of a 304.
Note: If you use this technique, you need to make sure that you are pointing to the right .jsp file for loading your application (before this change you were loading your app using the HTML file).
2) If you don't want to use the JSP solution and want to stick with your HTML file, then you need JavaScript to dynamically add the unique parameter value on the client side when loading the nocache file, as sketched below. I am assuming that should not be a big deal for you, given the solution above.
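For instance, a hypothetical sketch of that client-side variant (same idea as the JSP example above, but done in the browser):

<script type="text/javascript">
  // Append a unique timestamp parameter so every page load requests a
  // fresh copy of the nocache.js instead of reusing the browser's cache.
  document.write('<script src="MyProject/MyProject.nocache.js?dummyParam='
      + new Date().getTime() + '"><\/script>');
</script>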
I have used the first technique successfully; I hope this helps.
The app.nocache.js on the browser WAS RELOADED from the web server (200 OK), because the file on the server was newer than the browser cache. I verified that file size and timestamp for the new file returned by the server were correct. (This is info Chrome reports about the server's HTTP response)
I wouldn't rely on this. I've seen a bit of strange behaviour in Chrome's dev tools with the Network tab in combination with caching (at least, it's not 100% transparent to me). In case of doubt, I usually still consult Firebug.
So Chrome probably still uses the old version. It may have decided long ago that it will never have to reload the resource again. Clearing the cache should resolve this. Then make sure to set the correct caching headers before reloading the page; see e.g. Ideal HTTP cache control headers for different types of resources.
Open the page in incognito mode just to get rid of the cache issue and unblock yourself.
You need to configure the cache time as mentioned in the other comments.
After unsuccessfully trying to prevent caching via Apache, I created a bash script that root runs every minute in a cron job on my Linux Tomcat server.
#!/bin/bash
#
# Touches GWT nocache.js files in the Tomcat web app directory to prevent caching.
# Execute this script every minute in a root cron job.
#
cd /var/lib/tomcat7/webapps
find . -name '*nocache.js' | while read file; do
    logger "Touching file '$file'"
    touch "$file"
done
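The matching crontab entry for root would be something like this (the script path here is assumed):

* * * * * /root/touch_nocache.sh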