Is there a way to tell django compressor to create source maps - django-compressor

I want to be able to debug minified, compressed JavaScript code on my production site. Our site uses django-compressor to create minified and compressed JS files. I recently read that Chrome can use source maps to help debug such JavaScript. However, I don't know how (or whether it is possible) to tell django-compressor to create source maps when compressing the JS files.

I don't have a good answer regarding outputting separate source map files; however, I was able to get inline source maps working.
Prior to adding source maps, my settings.py file used the following precompilers:
COMPRESS_PRECOMPILERS = (
    ('text/coffeescript', 'coffee --compile --stdio'),
    ('text/less', 'lessc {infile} {outfile}'),
    ('text/x-sass', 'sass {infile} {outfile}'),
    ('text/x-scss', 'sass --scss {infile} {outfile}'),
    ('text/stylus', 'stylus < {infile} > {outfile}'),
)
After a quick
$ lessc --help
you find out that you can embed the Less sources and the source map inline in the output CSS file. So my new text/less precompiler entry looks like:
('text/less', 'lessc --source-map-less-inline --source-map-map-inline {infile} {outfile}'),
Hope this helps.
Edit: Forgot to add, lessc >= 1.5.0 is required for this; to upgrade, use
$ [sudo] npm update -g less

While I couldn't get this to work with django-compressor (though it should be possible, I think I just had issues getting the app set up correctly), I was able to get it working with django-assets.
You'll need to add the appropriate command-line argument to the less filter source code as follows:
diff --git a/src/webassets/filter/less.py b/src/webassets/filter/less.py
index eb40658..a75f191 100644
--- a/src/webassets/filter/less.py
+++ b/src/webassets/filter/less.py
@@ -80,4 +80,4 @@ class Less(ExternalTool):
     def input(self, in_, out, source_path, **kw):
         # Set working directory to the source file so that includes are found
         with working_directory(filename=source_path):
-            self.subprocess([self.less or 'lessc', '-'], out, in_)
+            self.subprocess([self.less or 'lessc', '--line-numbers=mediaquery', '-'], out, in_)
Aside from that tiny addition:
make sure you've got the Node (not the Ruby gem) less compiler (>= 1.3.2, IIRC) available in your path.
turn on the Sass source-maps option buried away in Chrome's web inspector config pages. (Yes, 'Sass', not Less: Less tweaked its debug-info format to match Sass's, since Sass had already implemented a Chrome-compatible mapping and their formats weren't that different to begin with anyway...)

Not out of the box, but you can extend CompilerFilter to make a custom filter:
from compressor.filters import CompilerFilter

class UglifyJSFilter(CompilerFilter):
    command = ("uglifyjs -c -m "
               "--source-map-root={relroot}/ "
               "--source-map-url={name}.map.js "
               "--source-map={relpath}/{name}.map.js -o {output}")

Related

Error with static compilation Qt with postgresql driver

I have installed Qt 5.12.5 and its sources through the Maintenance Tool. I have the following directories:
C:\Qt\5.12.5\Src
C:\Qt\Tools\mingw730_32\
C:\Qt\Tools\mingw730_64\
On the other hand, I have read that the downloadable Postgres version is compiled with MSVC, and that I must compile my own version. I have done so by following that link, and now I have a PostgreSQL build in c:\pgsql.
Finally, I have added c:\pgsql to the user Path.
Next, I opened PowerShell in admin mode and changed to C:\Qt\5.12.5\Src\.
Next, set the env path for this PowerShell session:
$env:Path += ";C:\Qt\Tools\mingw730_64\bin\;C:\Qt\5.12.5\Src;C:\pgsql\include\;C:\pgsql\lib\;C:\pgsql\bin\" (setting the pgsql path again....)
After that, I executed configure.bat like this:
configure -v -static -release -static-runtime -platform win32-g++ -prefix C:\Qt\5.12.5\Estatico\ -opensource -confirm-license -qt-zlib -qt-pcre -qt-libpng -qt-libjpeg -qt-freetype -opengl desktop -no-openssl -opensource -confirm-license -skip webengine -make libs -nomake tools -nomake examples -nomake tests -sql-psql
But I get this error:
ERROR: Feature 'sql-psql' was enabled, but the pre-condition 'libs.psql' failed.
Searching in config.log, I can read these lines:
loaded result for library config.qtbase_sqldrivers.libraries.psql
Trying source 0 (type pkgConfig) of library psql ...
pkg-config use disabled globally.
=> source produced no result.
Trying source 1 (type psqlConfig) of library psql ...
pg_config not found.
=> source produced no result.
Trying source 2 (type psqlEnv) of library psql ...
None of [liblibpq.dll.a liblibpq.a libpq.dll.a libpq.a libpq.lib] found in [] and global paths.
=> source produced no result.
Trying source 3 (type psqlEnv) of library psql ...
=> source failed condition '!config.win32'.
test config.qtbase_sqldrivers.libraries.psql FAILED
What can I do, or what is the proper way to do this?
Thank you in advance.
UPDATE
There is a similar question here, but it hasn't been solved, and that question asks about Visual Studio. I want to compile it under MinGW.
The solution suggested by @Soheil Armin works fine, but I needed to delete the entire source tree and reinstall it as he suggested; otherwise, a new configure won't work.
Also, the ^ characters can be omitted:
configure <your parameters>
PSQL_LIBS="C:\pgsql\lib\libpq.a"
-I "C:\pgsql\include"
-L "C:\pgsql\lib"
You need to explicitly define the Postgres library paths:
configure <your parameters> ^
PSQL_LIBS="C:\pgsql\lib\libpq.a" ^
-I "C:\pgsql\include" ^
-L "C:\pgsql\lib"

How can I get "HelloWorld - BitBake Style" working on a newer version of Yocto?

In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools+libraries already, the easiest way is to clone them with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command, which will get your shell environment set up again; it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create or keep the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, e.g.:
BBLAYERS ?= " \
  ${TOPDIR}/meta-hello \
  /tmp/poky/meta \
  /tmp/poky/meta-poky \
  /tmp/poky/meta-yocto-bsp \
  "
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
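At this point the workspace should look roughly like this (a sketch assembled from the files listed above; generated build directories omitted):
/tmp/bbhello/
  conf/
    local.conf
    bblayers.conf
    templateconf.cfg
  meta-hello/
    conf/
      layer.conf
    recipes-editor/
      nano/
        nano.bb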
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
It is caused by the book website's bbhello/meta-hello/conf/layer.conf being internally inconsistent. It uses the collection name "hello" but the next two lines use _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that bitbake recognizes. In this case, add this line to nano.bb:
LICENSE = "GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the inline Python expressions used in the recipe, because bb.data was deprecated and has since been removed. Replace it with d, e.g.:
PV_MAJOR = "${@d.getVar('PV',d,1).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV',d,1).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
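For reference, it is simply the md5sum of the downloaded COPYING file (output shown for illustration):
$ md5sum COPYING
f27defe1e96c2e1ecd4e0c9be8967949  COPYING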
Done!
Now bitbake nano ought to work, and when it completes you should see that it built nano:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I have recently worked through that hands-on hello-world project. In my opinion, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
In fact, the bitbake that you got from poky builds only for the target, unless you state in your recipe that you are building for the host machine (native). You can do the latter by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe; otherwise bitbake raises an error. In our case, we are trying to build version 2.2.6 of the nano editor, whose current license is GPLv3, so it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a Python function, which means you must access metadata through the d dictionary. Below is a suggestion for the do_unpack Python function; you can apply the same concept to the next tasks (do_configure, do_compile), as sketched after the listing:
python do_unpack() {
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
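For instance, a do_compile task following the same pattern might look like this (a sketch, assuming do_configure has already run ./configure in the unpacked source directory):
python do_compile() {
    # Hypothetical sketch mirroring do_unpack above; it reuses the same
    # WORKDIR/P layout that do_unpack extracted the tarball into.
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src_dir = os.path.join(workdir, p)
    bb.plain("Compiling nano")
    os.system("cd " + src_dir + " && make")
    bb.plain("nano compiled successfully")
}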
Launching the nano editor
After successfully building your nano editor package, you can find the nano executable in the following directory, assuming you are using Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
Should you have any comments or questions, don't hesitate!

Automake, generated source files and VPATH builds

I'm doing VPATH builds with automake. I'm now also using generated source, with SWIG. I've got rules in Makefile.am like:
dist_noinst_DATA = whatever.swig
whatever.cpp: whatever.swig
	swig -c++ -php $^
Then the file gets used later:
myprogram_SOURCES = ... whatever.cpp
It works fine when $builddir == $srcdir. But when doing VPATH builds (e.g. mkdir build; cd build; ../configure; make), I get error messages about missing whatever.cpp.
Should generated source files go to $builddir or $srcdir? (I reckon probably $builddir.)
How should dependencies and rules be specified to put generated files in the right place?
Simple answer
You should assume that $srcdir is read-only, so you must not write anything there.
So, your generated source-code will end up in $(builddir).
By default, autotools-generated Makefiles will only look for source files in $srcdir, so you have to tell them to check $builddir as well. Adding the following to your Makefile.am should help:
VPATH = $(srcdir) $(builddir)
After that you might end up with a no rule to make target ... error, which you should be able to fix by updating your source-generating rule as in:
$(builddir)/whatever.cpp: whatever.swig
	# ...
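For example, the completed rule could reuse the swig invocation from the question, adding -o so the output lands in the build tree (a sketch; adjust the flags to your project):
$(builddir)/whatever.cpp: whatever.swig
	swig -c++ -php -o $@ $^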
A better solution
You might notice that in your current setup, the release tarball (as created by make dist) will contain the whatever.cpp file as part of your sources, since you added it to myprogram_SOURCES.
If you don't want this (e.g. because it might mean that the build process will take the pregenerated file rather than generating it again), you might want to use something like the following.
It uses a wrapper source-file (whatever_includer.cpp) that simply includes the generated file, and it uses -I$(builddir) to then find the generated file.
Makefile.am:
dist_noinst_DATA = whatever.swig

whatever.cpp: whatever.swig
	swig -c++ -php $^

whatever_includer.cpp: whatever.cpp

myprogram_SOURCES = ... whatever_includer.cpp
myprogram_CPPFLAGS = ... -I$(builddir)

clean-local::
	rm -f $(builddir)/whatever.cpp
whatever_includer.cpp:
#include "whatever.cpp"
Usually, you want to keep $srcdir read-only, so that if, for instance, the source is distributed unpacked on a CD-ROM, you can still run /.../configure from some other part of the file-system.
However, if you are using SWIG to generate source code for a wrapper library, you probably want to distribute that SWIG-generated code as well, so that your users do not need to install SWIG to compile your code. Then you indeed have a choice: you can decide that the SWIG-generated code should end up in $builddir (that's OK: make dist will collect it there and include it in the tarball), or you could decide to output SWIG-generated code in $srcdir, since it is really a source from the point of view of the distributed package. An advantage of keeping it in $srcdir is that when make distcheck attempts to build your package from a read-only source directory, it will fail on any attempt to call SWIG to regenerate the wrapper source. If you kept your wrapper source in $builddir, you might not notice that you have some broken rule that causes SWIG to be run on the user's host; by generating in $srcdir, you ensure that SWIG is not needed by your users.
So my preference is to output SWIG wrapper sources in $srcdir. My setup for Python wrappers looks as follows:
EXTRA_DIST = spot.i
python_PYTHON = $(srcdir)/spot.py   # _PYTHON is distributed by default
pyexec_LTLIBRARIES = _spot.la
MAINTAINERCLEANFILES = $(srcdir)/spot_wrap.cxx $(srcdir)/spot.py

_spot_la_SOURCES = $(srcdir)/spot_wrap.cxx $(srcdir)/spot_wrap.h
_spot_la_LDFLAGS = -avoid-version -module
_spot_la_LIBADD = $(top_builddir)/src/libspot.la

$(srcdir)/spot_wrap.cxx: $(srcdir)/spot.i
	$(SWIG) -c++ -python -I$(srcdir) -I$(top_srcdir)/src $(srcdir)/spot.i
# Handle the multi-file output of SWIG.
$(srcdir)/spot.py: $(srcdir)/spot.i
	$(MAKE) $(AM_MAKEFLAGS) spot_wrap.cxx
Note that I use $(srcdir) for all targets because of limitations of the VPATH feature in various flavors of make. My setup for dealing with the multi-file output of SWIG could be improved, but since these rules are not run by users and they have never caused me any problems, I do not bother.

wget files from FTP-like listings

So, a site that used to offer FTP now has an HTTP front-end and won't allow FTP connections. The site in question (for an example directory) shows a page with links to different dates. Inside each of these date directories there are many files, and I typically just need to get some file with a clear pattern, e.g. *h17v04*.hdf. I thought this could work:
wget -I "${PLATFORM}/${PRODUCT}/${YEAR}.*" -r -l 4 \
--user-agent="Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1" \
--verbose -c -np -nc -nd \
-A "*h17v04*.hdf" http://e4ftl01.cr.usgs.gov/$PLATFORM/$PRODUCT/
where PLATFORM=MOLT, PRODUCT=MOD09GA.005 and YEAR=2004, for example. This seems to start looking into all the useful dates, finds the index.html, and then just skips to the next directory, without downloading the relevant hdf file:
--2013-06-14 13:09:18-- http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/
Reusing existing connection to e4ftl01.cr.usgs.gov:80.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html'
[ <=> ] 174,182 134K/s in 1.3s
2013-06-14 13:09:20 (134 KB/s) - `e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html' saved [174182]
Removing e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.01/index.html since it should be rejected.
--2013-06-14 13:09:20-- http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/2004.01.02/
[...]
If I omit the -A option, only the index.html file is downloaded to my system, but it appears it's not parsed and the links are not followed. I don't really know what more is required to make this work, as I can't see why it doesn't.
SOLUTION
In the end, the problem was due to an old bug in the local version of wget. However, I ended up writing my own script for downloading MODIS data from the server above. The script is pure Python, and is available from here.
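For the record, the core of such a script can be quite small. Here is a minimal sketch of the approach using only the standard library (the href regex and URL layout are assumptions based on the session shown above):
# Minimal sketch: walk the date directories of an HTTP listing and
# download files matching a pattern. The href regex and URL layout are
# assumptions based on the e4ftl01.cr.usgs.gov listing shown above.
import re
import urllib.request

BASE = "http://e4ftl01.cr.usgs.gov/MOLT/MOD09GA.005/"

def links(url):
    # Crude extraction of href targets from an HTML directory listing.
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    return re.findall(r'href="([^"]+)"', html)

for date_dir in links(BASE):
    if not re.match(r"2004\.\d{2}\.\d{2}/$", date_dir):
        continue  # keep only the YEAR directories we care about
    for name in links(BASE + date_dir):
        if re.search(r"h17v04.*\.hdf$", name):
            # Save each matching file into the current directory.
            urllib.request.urlretrieve(BASE + date_dir + name, name)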
Consider using pyModis instead of wget. It is a free and open-source Python-based library for working with MODIS data. It offers bulk download for user-selected time ranges, mosaicking of MODIS tiles, reprojection from Sinusoidal to other projections, and conversion of HDF into other formats. See
http://www.pymodis.org/

Algebra filter error in moodle

I installed Moodle 1.9.12 and now I want to use algebra notation in content. I enabled "TeX Notation" and "Algebra Notation" in the administrator panel and also installed mimetex, dvips, and ImageMagick on the server. Fortunately, TeX Notation works fine, but I got the following error for Algebra:
sh: /var/www/html/moodle/filter/tex/mimetex.linux: not found
The shell command
"/var/www/html/moodle/filter/tex/mimetex.linux" -e "/var/www/moodledata/filter/algebra/de06d6c44d98ba4e42dffca988bf530b.gif" -- '\Large \frac{\sin\left(z\right)}{x^{2}+y^{2}}'
returned status = 127
File size of mimetex executable /var/www/html/moodle/filter/tex/mimetex.linux is 830675
The file permissions are: 100775
The md5 checksum of the file is 56bcc40de905ce92ebd7b083c76e019e
Image not found!
Note: /var/www/html/moodle/filter/tex/mimetex.linux exists on the server and is executable!
What is the problem? Any ideas?
From what you have described, calling the general TeX filter debug page works and does not show the same error:
/filter/tex/texdebug.php works, but /filter/algebra/algebradebug.php does not.
If this is the case, perhaps you could check whether open_basedir or safe_mode_exec_dir is set in a way that restricts the execution of /var/www/html/moodle/filter/tex/mimetex.linux while the current working directory is /var/www/html/moodle/filter/algebra.
You could look into this by visiting /admin/phpinfo.php on your site and looking carefully at the effective values of open_basedir, safe_mode, and safe_mode_exec_dir.
You could also check the Apache error log, or add the following lines to the top of the algebra debug PHP file; you might then see some extra error messages:
$CFG->debug = 6143;
$CFG->debugdisplay = 1;
Hope that helps