RTC ignore specific lines in source - version-control

I have a project that generates JavaScript files. I'd like to add the date and time when these files were generated, but I want RTC to ignore these lines.
How do I do that?

But I want RTC to ignore these lines.
This is not possible with Jazz RTC.
And you don't add versioning metadata (like who modified what) to the data (the file) itself.
What you want to see is done with RTC annotate.
But if you generate a file with that information, then you could add the generated file to a .jazzignore, in order to make sure RTC doesn't try to check it in (see the sketch after the annotate example below).
You can use lscm annotate in order to generate such a file.
See this example:
$ lscm annotate flux.capacitor/requirements.txt
1 Marty (8556) 2009-11-04 02:47 PM - Must not need any more than 1.5 gigawatts of power
2 Marty (8556) 2009-11-04 02:47 PM - Determine minimum necessary speed
3 Doc (8616) 2009-11-04 02:47 PM Results of initial t - Initial trials suggest 60kmh
You can redirect the output of that command to a file:
$ lscm annotate flux.capacitor/requirements.txt > flux.capacitor/requirements.annotated
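A .jazzignore is a plain properties file checked in next to your sources. A minimal sketch for this case, assuming the generated file is named requirements.annotated (core.ignore is the real .jazzignore property; the filename is just this example's):
# .jazzignore: keep the generated annotated file out of check-ins
core.ignore= \
    {requirements.annotated}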

"failed to load any lstm-specific dictionaries for lang " tesseract 4.1

I tried to train tesseract 4.1 using the OCRD project, but after training completed I copied lang.traineddata over and got the above error.
The tesseract wiki page is very confusing to follow: it asks to use combine_lang_model after making the lstmf file. I actually already have the lstmf files; I created them from tif/box pairs.
Please help me with the next step.
Related discussion: Failed to load any lstm-specific dictionaries for lang xxx
Suppose your training folder looks like this:
OCRD/makefile
OCRD/data/foo-ground-truth.
You could try the following steps:
Find the WORDLIST_FILE/NUMBERS_FILE/PUNC_FILE in the makefile, and change them to:
WORDLIST_FILE := data/$(MODEL_NAME).wordlist
NUMBERS_FILE := data/$(MODEL_NAME).numbers
PUNC_FILE := data/$(MODEL_NAME).punc
Suppose your base traineddata is eng.traineddata.
2.1 Download the .wordlist/.numbers/.punc files from langdata_lstm.
2.2 Place them in OCRD/data
2.3 If MODEL_NAME = foo, rename them to foo.wordlist, foo.numbers, foo.punc.
If you don't have the base traineddata, you could try this too. But if your base traineddata is, say, afr, you should download the files from langdata_lstm/afr instead.
Run make training again.
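Put together, the download-and-rename step could look like the sketch below, assuming base model eng and MODEL_NAME = foo (the raw-file URLs are assumptions based on the tesseract-ocr/langdata_lstm repository layout):
cd OCRD
# Fetch the base language's dictionary helper files and store them under the new model name
wget https://raw.githubusercontent.com/tesseract-ocr/langdata_lstm/master/eng/eng.wordlist -O data/foo.wordlist
wget https://raw.githubusercontent.com/tesseract-ocr/langdata_lstm/master/eng/eng.numbers -O data/foo.numbers
wget https://raw.githubusercontent.com/tesseract-ocr/langdata_lstm/master/eng/eng.punc -O data/foo.punc
make training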
The cause of this error:
In OCRD, the default path of the above three files is $(OUTPUT_DIR) = data/$(MODEL_NAME), and all files in this path are automatically generated during the training process.
If the variable START_MODEL is not assigned, the makefile will not generate any related files under this path.
If the variable START_MODEL has been assigned, then foo.lstm-number-dawg, foo.lstm-punc-dawg, foo.lstm-word-dawg and so on will be produced in data/$(MODEL_NAME), but they are not the right ones. So there may be a bug in OCRD.

How can I get "HelloWorld - BitBake Style" working on a newer version of Yocto?

In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools+libraries already, the easiest way is to clone them with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command to get your shell environment set up again, but it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create (or keep) the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, e.g.:
BBLAYERS ?= " \
${TOPDIR}/meta-hello \
/tmp/poky/meta \
/tmp/poky/meta-poky \
/tmp/poky/meta-yocto-bsp \
"
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
It is caused by the book-website's bbhello/meta-hello/conf/layer.conf being internally inconsistent. It uses the collection-name "hello" but on the next two lines uses _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that bitbake recognizes. In this case, add this line to nano.bb:
LICENSE = "GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the special python commands used in the recipe, because bb.data was deprecated and has since been removed. Instead, replace it with d, e.g.:
PV_MAJOR = "${@d.getVar('PV',d,1).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV',d,1).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
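If you want to double-check such a checksum yourself, it is just download-and-hash; the URL below is an assumption from expanding the recipe's ${SITE}/v${PV_MAJOR}.${PV_MINOR} pattern for nano 2.2.6:
$ wget -q http://www.nano-editor.org/dist/v2.2/COPYING
$ md5sum COPYING
f27defe1e96c2e1ecd4e0c9be8967949  COPYING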
Done!
Now bitbake nano ought to work, and when it is complete you should see that it has built nano:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I recently worked on that hands-on hello world project. In my opinion, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
When you build with the bitbake that you got from poky, it builds only for the target, unless you state in your recipe that you are building for the host machine (native). You can do the latter by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe, otherwise bitbake raises an error. In our case, we are building version 2.2.6 of the nano editor; its current license is GPLv3, hence it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a python function, which means it is mandatory to access metadata through the d dictionary. Below is a suggestion for the do_unpack python function; you can use the same concept to code the next tasks (do_configure, do_compile), as sketched after the function:
python do_unpack() {
    # Fetch the paths we need from the metadata dictionary
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
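For illustration, a do_compile task following the same pattern might look like the sketch below; it is not from the book, and the configure/make invocation is an assumption for nano 2.2.6:
python do_compile() {
    # Same concept: resolve every path through the d dictionary
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    src_dir = os.path.join(workdir, p)
    bb.plain("Configuring and compiling")
    os.system("cd " + src_dir + " && ./configure && make")
    bb.plain("compilation finished successfully")
}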
Launching the nano editor
After successfully building your nano editor package, you can find the nano executable in the following directory, in case you are using Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
Should you have any comments or questions, don't hesitate!

OpenCobol & PostgreSQL on Windows with Visual Studio

I'm currently facing a problem getting these four pieces (OpenCobol, PostgreSQL, Windows, Visual Studio) to work together.
Using binaries I downloaded from kiska's site, I'm able to compile COBOL to C and run it with cobcrun, or compile it to an executable. However, I can't get OpenCobol to find the Postgres functions.
Here is the start of my COBOL program:
identification division.
program-id. pgcob.
data division.
working-storage section.
01 pgconn usage pointer.
01 pgres usage pointer.
01 resptr usage pointer.
01 resstr pic x(80) based.
01 result usage binary-long.
01 answer pic x(80).
procedure division.
    display "Before connect:" pgconn end-display
    call "PQconnectdb" using
        by reference "dbname = postgres" & x"00"
        by reference "host = 10.37.180.146" & x"00"
        returning pgconn
    end-call
...
The call to PQconnectdb fails with: module not found: PQconnectdb.
I noticed that if I rename libpq.dll, the error message changes to "can't find entry point", so at least I'm sure it can find my DLL.
After digging into the code of the call method of the libcob library, I found it was possible to preload some DLLs using the environment variable COB_PRE_LOAD, but still no results.
Here is what the script to compile the COBOL looks like:
call "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\amd64\vcvarsamd64.bat"
set COB_CONFIG_DIR=C:\OpenCobol\config
set COB_COPY_DIR=C:\OpenCobol\Copy
set COB_LIBS=%COB_LIBS% c:\OpenCobol\libpq.lib
set COB_LIBRARY_PATH=C:\OpenCobol\bin
set COB_PRE_LOAD=C:\OpenCobol\libpq.dll
@echo on
cobc -info
cobc -free -o pgcob -L:"C:\OpenCobol" -llibpq.lib test_cobol\postgres.cob
call cobcrun pgcob
I don't see anything missing. I'm using the 64-bit binaries from kiska's site and the 64-bit cl.exe from Visual Studio, and Postgres is a 64-bit version too (checked with dependencyChecker).
I even tried to compile the generated C from Visual Studio, with the same result. But I may be missing something; I'm pretty rotten in C and never really had to manage DLLs or use Visual Studio.
What am I missing?
COB_PRE_LOAD doesn't take any path or extension; see the short documentation for the available runtime configurations. I guess
set COB_LIBRARY_PATH=C:\OpenCobol\bin;C:\OpenCobol
set COB_PRE_LOAD=libpq
will work. You can omit C:\OpenCobol\bin if you did not place any additional executables there.
If it doesn't work (even if it does) I'd try to get the C functions resolved at compile time. Either use
CALL STATIC "PQconnectdb" using ...
or an appropriate CALL-CONVENTION or leave the program as-is and use
cobc -free -o pgcob -L"C:\OpenCobol" -llibpq -K PQconnectdb test_cobol\postgres.cob
From cobc --help:
-K generate CALL to <entry> as static
In general: the binaries from kiska.net are quite outdated. I highly suggest getting newer ones from the official download site, or ideally building them on your own from source; see the documentation for building GnuCOBOL with Visual Studio.

CMakeLists.txt for Eclipse and ROS

I have been doing a project that has many classes (cpp and header files) and one executable cpp that has int main. With ROS, I'm trying to link these with CMakeLists.txt so that I can keep compiling without having to change the file every time. Here is my CMakeLists.txt:
cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
rosbuild_init()
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
rosbuild_add_library(${PROJECT_NAME} Im_Basibos.cpp)
rosbuild_add_library(${PROJECT_NAME} Im_HedefeGitme.cpp)
rosbuild_add_library(${PROJECT_NAME} Im_Konum.cpp)
rosbuild_add_library(${PROJECT_NAME} Im_Robot.cpp)
rosbuild_add_library(${PROJECT_NAME} Im_Sonar.cpp)
rosbuild_add_executable(srctest Im_RobotKontrol.cpp)
I don't know how to link the header files. I have to link these:
Im_Basibos.h, Im_Basibos.cpp
Im_HedefeGitme.h, Im_HedefeGitme.cpp
Im_Konum.h, Im_Konum.cpp
Im_Robot.h, Im_Robot.cpp
Im_Sonar.h, Im_Sonar.cpp
and
Im_Robot.cpp that has int main()
Any answer will be much appreciated. Thanks in advance.
I guess rosbuild_add_library works the same as add_library and is not meant to work the way you're using it. It's meant to create static or shared libraries, not to build object files.
I'm giving you two possible ways to build your executable.
version 1
If you only need to build an executable srctest and no separate library.
What you need to do is to list your source files in some variables, say srctest_SOURCES:
set(srctest_SOURCES Im_Basibos.cpp Im_HedefeGitme.cpp
    Im_Konum.cpp Im_Robot.cpp Im_Sonar.cpp
    Im_RobotKontrol.cpp)
Then build those sources into an executable:
add_executable(srctest ${srctest_SOURCES})
version 2
Now, if you really want to first build a library, say testlib, then link it to your srctest executable, that can be done too:
set(testlib_SOURCES Im_Basibos.cpp Im_HedefeGitme.cpp
    Im_Konum.cpp Im_Robot.cpp Im_Sonar.cpp)
add_library(testlib ${testlib_SOURCES})
add_executable(srctest Im_RobotKontrol.cpp)
target_link_libraries(srctest testlib)
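As for the header files: headers are not compiled on their own, so they are never listed as sources; the compiler only has to find them at include time. If they sit in the same directory as the .cpp files this already works. If you keep them in a separate include/ folder (a hypothetical layout, not from the question), tell CMake where to look:
include_directories(${PROJECT_SOURCE_DIR}/include)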
Thanks to Guillaume for the methods. Since I'm working in a ROS environment, the exact commands that did the trick were:
rosbuild_add_library(${PROJECT_NAME} Im_Basibos.cpp)
rosbuild_add_library(${PROJECT_NAME} Im_HedefeGitme.cpp)
rosbuild_add_library(${PROJECT_NAME} Im_Konum.cpp)
rosbuild_add_library(${PROJECT_NAME} Im_Robot.cpp)
rosbuild_add_library(${PROJECT_NAME} Im_Sonar.cpp)
rosbuild_add_executable(srctest Im_RobotKontrol.cpp)
target_link_libraries(srctest ${PROJECT_NAME})

Is there a way to tell django compressor to create source maps

I want to be able to debug minified, compressed JavaScript code on my production site. Our site uses django compressor to create minified and compressed JS files. I recently read about Chrome being able to use source maps to help debug such JavaScript. However, I don't know how (or whether it's possible) to tell django compressor to create source maps when compressing the JS files.
I don't have a good answer regarding outputting separate source map files, however I was able to get inline working.
Prior to adding source maps, my settings.py file used the following precompilers:
COMPRESS_PRECOMPILERS = (
    ('text/coffeescript', 'coffee --compile --stdio'),
    ('text/less', 'lessc {infile} {outfile}'),
    ('text/x-sass', 'sass {infile} {outfile}'),
    ('text/x-scss', 'sass --scss {infile} {outfile}'),
    ('text/stylus', 'stylus < {infile} > {outfile}'),
)
After a quick
$ lessc --help
you find out you can put the less and map files into the output css file. So my new text/less precompiler entry looks like:
('text/less', 'lessc --source-map-less-inline --source-map-map-inline {infile} {outfile}'),
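For clarity, the complete setting with only that one entry changed then reads:
COMPRESS_PRECOMPILERS = (
    ('text/coffeescript', 'coffee --compile --stdio'),
    ('text/less', 'lessc --source-map-less-inline --source-map-map-inline {infile} {outfile}'),
    ('text/x-sass', 'sass {infile} {outfile}'),
    ('text/x-scss', 'sass --scss {infile} {outfile}'),
    ('text/stylus', 'stylus < {infile} > {outfile}'),
)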
Hope this helps.
Edit: Forgot to add, lessc >= 1.5.0 is required for this; to upgrade, use
$ [sudo] npm update -g less
While I couldn't get this to work with django-compressor (though it should be possible, I think I just had issues getting the app set up correctly), I was able to get it working with django-assets.
You'll need to add the appropriate command-line argument to the less filter source code as follows:
diff --git a/src/webassets/filter/less.py b/src/webassets/filter/less.py
index eb40658..a75f191 100644
--- a/src/webassets/filter/less.py
+++ b/src/webassets/filter/less.py
@@ -80,4 +80,4 @@ class Less(ExternalTool):
def input(self, in_, out, source_path, **kw):
# Set working directory to the source file so that includes are found
with working_directory(filename=source_path):
- self.subprocess([self.less or 'lessc', '-'], out, in_)
+ self.subprocess([self.less or 'lessc', '--line-numbers=mediaquery', '-'], out, in_)
Aside from that tiny addition:
make sure you've got the node -- not the ruby gem -- less compiler (>=1.3.2 IIRC) available in your path.
turn on the sass source-maps option buried away in chrome's web inspector config pages. (yes, 'sass' not less: less tweaked their debug-info format to match sass's, since sass had already implemented a chrome-compatible mapping and their formats weren't that different to begin with anyway...)
Not out of the box, but you can extend it with a custom filter:
from compressor.filters import CompilerFilter

class UglifyJSFilter(CompilerFilter):
    # Build the uglifyjs command line with source-map output
    command = ("uglifyjs -c -m "
               "--source-map-root={relroot}/ "
               "--source-map-url={name}.map.js "
               "--source-map={relpath}/{name}.map.js -o {output}")