Protobufs import from another directory

While trying to compile a proto file named UserOptions.proto, which imports Account.proto, using the command below:
protoc --proto_path=/home/project_new1/account --java_out=/home/project_new1/source /home/project_new1/settings/UserOptions.proto
I get the following error:
/home/project_new1/settings/UserOptions.proto: File does not reside within any path specified using --proto_path (or -I). You must specify a --proto_path which encompasses this file.
PS: UserOptions.proto, present in the directory /home/project_new1/settings, imports Account.proto, present in the directory /home/project_new1/account.
Proto descriptor files:
UserOptions.proto:
package settings;
import "Account.proto";
option java_outer_classname = "UserOptionsVOProto";
Account.proto:
package account;
option java_outer_classname = "AccountVOProto";
message Object
{
    optional string userId = 1;
    optional string service = 2;
}

As the error message states, the file you pass on the command line needs to be in one of the --proto_paths. In your case, you have only specified one --proto_path of:
/home/project_new1/account
But the file you're passing is:
/home/project_new1/settings/UserOptions.proto
Notice that the file is not in the account subdirectory; it's in settings instead.
You have two options:
(Not recommended) Pass a second --proto_path argument to add .../settings to the path (see the sketch after option 2).
(Recommended) Use the root of your source tree as the proto path. E.g.:
protoc --proto_path=/home/project_new1/ --java_out=/home/project_new1 /home/project_new1/settings/UserOptions.proto
In this case, to import Account.proto, you'll need to write:
import "acco‌​unt/Account.proto";

For those of us who want this really spelled out, here is an example where I have installed the protoc beta for gRPC using the NuGet packages Google.Protobuf, Grpc.Core and Grpc.Tools. My solution packages are one level above my Grpc directory (i.e. at BruTrader\packages). My .proto files are at BruTrader\Grpc\protos.
1. My .proto file:
syntax = "proto3";
import "timestamp.proto";
import "enums.proto";
package BruTrader.Grpc;
message DividendMessage {
    double amount = 1;
    google.protobuf.Timestamp dateUnix = 2;
}
2. My GenerateProto.bat file:
..\packages\Google.Protobuf.3.0.0-beta2\tools\protoc.exe -I..\Grpc\protos -I..\packages\Google.Protobuf.3.0.0-beta2\tools\google\protobuf --csharp_out=..\Grpc\Generated --grpc_out=..\Grpc\Generated --plugin=protoc-gen-grpc=..\packages\Grpc.Tools.0.13.0\tools\grpc_csharp_plugin.exe %1
3. My BuildProtos.bat:
call GenerateProto ..\Grpc\protos\masterinstrument.proto
call GenerateProto .\protos\instrument.proto
etc.
4. BuildProtos.bat is executed as a Pre-build event on my Grpc project like this:
CD $(ProjectDir)
CALL "$(ProjectDir)BuildProtos.bat"

For my environment (Windows 10 Pro operating system and the C++ programming language), I used protoc-3.12.2-win64.zip, which you can download from here. You should open a Windows PowerShell inside the protoc-3.12.2-win64\bin path and then execute one of the following commands:
.\protoc.exe -I=C:\Users\UserName\Desktop\SRC --cpp_out=C:\Users\UserName\Desktop\DST C:\Users\UserName\Desktop\SRC\addressbook.proto
Or
.\protoc.exe --proto_path=C:\Users\UserName\Desktop\SRC --cpp_out=C:\Users\UserName\Desktop\DST C:\Users\UserName\Desktop\SRC\addressbook.proto
Note:
1. My source folder is: C:\Users\UserName\Desktop\SRC
2. My destination folder is: C:\Users\UserName\Desktop\DST
3. My .proto file is: C:\Users\UserName\Desktop\SRC\addressbook.proto

Related

How to copy whole directories containing subdirectories to /boot (i.e. bootfs) in Yocto while inheriting the core-image class?

I have a directory, which again contains subdirectories, that is built as part of another recipe and moved to DEPLOY_DIR_IMAGE using the deploy bbclass. Now I want to copy it to the main image's boot partition.
If it were a single file, appending the required filename to the IMAGE_EFI_BOOT_FILES variable would make Yocto copy it to /boot. But the same doesn't work for directories containing subdirectories; please suggest a way to include the subdirectories as well. Thank you.
PS: I have tried appending IMAGE_EFI_BOOT_FILES += "parent_dir/*", but it didn't work.
IMAGE_EFI_BOOT_FILES acts like the well-known IMAGE_BOOT_FILES and the other variables responsible for shipping files into the boot partition, and it expects files, not directories.
So, if you do not want to specify all the files by hand but instead want to pass a directory, I suggest using a Python method to collect the files for you and append them to the variable.
See the following example I developed and tested:
def get_files(d, dir):
    import os
    # If the argument is not a real path, assume it is the name of a
    # BitBake variable and read the directory from it
    dir_path = dir
    if not os.path.exists(os.path.dirname(dir)):
        dir_path = d.getVar(dir)
    # Return the plain files in the directory as a space-separated list
    return ' '.join(f for f in os.listdir(dir_path) if os.path.isfile(os.path.join(dir_path, f)))

IMAGE_EFI_BOOT_FILES += "${@get_files(d, 'DEPLOY_DIR_IMAGE')}"
The method tests whether the argument is a real path: if so, it checks for files directly; if not, it assumes the argument is a BitBake variable and reads its content. So if DEPLOY_DIR_IMAGE is, for example, /home/user/dir, passing DEPLOY_DIR_IMAGE or /home/user/dir will give the same result.
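Both call styles are then equivalent (a sketch reusing the /home/user/dir example from above):
IMAGE_EFI_BOOT_FILES += "${@get_files(d, 'DEPLOY_DIR_IMAGE')}"
IMAGE_EFI_BOOT_FILES += "${@get_files(d, '/home/user/dir')}"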
IMPORTANT
Note also that IMAGE_EFI_BOOT_FILES is normally set in a .conf file such as local.conf or a custom machine configuration file, and defining a Python function in a .conf file will not work. I suggest creating a class that holds the function and inheriting it globally in your .conf file:
meta-custom/classes/utils.bbclass (containing the get_files function above)
local.conf:
INHERIT += "utils"
IMAGE_EFI_BOOT_FILES += "${@get_files(d, 'DEPLOY_DIR_IMAGE')}"
Try this and let me know in the comments.
EDIT
I have just realized that BitBake already imports os within Python expression expansions, so you can do it in one line, without any need for a separate Python function. With PATH being any variable that contains the directory (e.g. PATH = "/path/to/directory/"):
IMAGE_EFI_BOOT_FILES += "${@' '.join('%s' % f for f in os.listdir('${PATH}') if os.path.isfile(os.path.join('${PATH}', f)))}"
Note: I was looking for a Yocto built-in that could achieve the above; I would like to share another way to implement the functionality for the community's benefit.
Add the following to your .bb file if you are using one, or refer to talel-belhadjsalem's answer to use utils.bbclass.
def add_directory_bootfs(d, dirname, bootfs_dir):
    file_list = list()
    boot_files_list = list()
    deploy_dir_image = d.getVar('DEPLOY_DIR_IMAGE')
    # Walk the tree under DEPLOY_DIR_IMAGE/dirname and collect every file in it
    for (dirpath, dirnames, filenames) in os.walk(os.path.join(deploy_dir_image, dirname)):
        file_list += [os.path.join(dirpath, file) for file in filenames]
    # Build 'source;destination' entries so the subdirectory layout is preserved in the boot partition
    for file in file_list:
        file_rel_path = os.path.relpath(file, os.path.join(deploy_dir_image, dirname))
        boot_file_entry = os.path.join(dirname, file_rel_path) + ';' + os.path.join(bootfs_dir, dirname, file_rel_path)
        boot_files_list.append(boot_file_entry)
    return ' '.join(boot_files_list)

IMAGE_EFI_BOOT_FILES += "${@add_directory_bootfs(d, 'relative_path_to_dir_in_deploy_dir_image', 'EFI/BOOT')}"

How can I get "HelloWorld - BitBake Style" working on a newer version of Yocto?

In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools and libraries already, the easiest way is to clone them with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command to get your shell environment set up again, but it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create or keep the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, ex:
BBLAYERS ?= " \
${TOPDIR}/meta-hello \
/tmp/poky/meta \
/tmp/poky/meta-poky \
/tmp/poky/meta-yocto-bsp \
"
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
This is caused by the book website's bbhello/meta-hello/conf/layer.conf being internally inconsistent: it uses the collection name "hello" but on the next two lines uses _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that bitbake recognizes. In this case, add this line to nano.bb:
LICENSE = "GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the special Python expressions used in the recipe, because bb.data.getVar was deprecated and has since been removed. Instead, call getVar on d directly, ex:
PV_MAJOR = "${@d.getVar('PV',1).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV',1).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
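For example, after downloading the matching COPYING file, you can reproduce the checksum with:
md5sum COPYING
(md5sum here is the standard GNU coreutils tool; any MD5 utility will do.)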
Done!
Now bitbake nano ought to work, and when it is complete you should see that it built nano:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I have recently worked on that hands-on hello world project. As far as I am concerned, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
In fact, when you build with the bitbake you got from Poky, it builds only for the target, unless you state in your recipe that you are building for the host machine (native). You can do the latter by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe, otherwise bitbake raises an error. In our case, we are trying to build version 2.2.6 of the nano editor, whose current license is GPLv3, hence it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a Python function, which means it is mandatory to access metadata through the d dictionary. Below is a suggestion for the do_unpack Python task; you can apply the same concept to the next tasks (do_configure, do_compile), as in the sketch after this block:
python do_unpack() {
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
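For instance, a do_compile written in the same style might look like this (a minimal sketch; it assumes the tarball unpacked into ${WORKDIR}/${P} and that nano builds with a plain configure-and-make):
python do_compile() {
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    # Assumption: the source was unpacked into WORKDIR/P (e.g. nano-2.2.6)
    source_dir = os.path.join(workdir, p)
    bb.plain("Configuring and compiling nano")
    os.system("cd " + source_dir + " && ./configure && make")
    bb.plain("nano compiled successfully")
}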
Launching the nano editor
After successfully building your nano editor package, you can find your nano executable in the following directory in case you are using Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
Should you have any comments or questions, don't hesitate to ask!

Postgres PL/JAVA: java.lang.ClassNotFoundException error after loading JAR file in database

I am getting a java.lang.ClassNotFoundException error inside Postgres when running a function that calls a JAR file I have loaded. I have installed and configured PL/JAVA (including the delivered examples) in my database and can run the examples successfully. I am now attempting to load/install my first JAR, but I am doing something wrong.
My host controls the OS version: CentOS 6.8. Postgres is version 8.4.
I am attempting to install my own very simple java class, which is a derivative of the delivered example Parameters.addOne class. All my code is in /tmp. Here are the steps I've followed:
Doug.java:
package com.msmetric;

import java.math.BigDecimal;
import java.sql.Date;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Time;
import java.sql.Timestamp;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.TimeZone;
import java.util.logging.Logger;

public class Doug {
    public static int addOne(int value) {
        return value + 1;
    }
}
Compiling Doug.java using 'javac Doug.java' succeeds.
Creating the JAR file with the Doug.class file in it using 'jar -cvf Doug.jar Doug.class' works fine.
Now I load the JAR file into Postgres (public schema), change the classpath, create the function that calls the JAR, then attempt to run it at the psql prompt.
Run sqlj.install_jar from psql:
select sqlj.install_jar('file:/tmp/Doug.jar','Doug',false);
Set the classpath inside Postgres (from psql prompt postgres=#):
select sqlj.set_classpath('public','Doug');
Create the function that calls the JAR. This create function code is taken directly from the examples.ddr file that came with PL/JAVA. I simply changed org.postgres to com.msmetric.
create or replace function addone(int) returns int as 'com.msmetric.Doug.addOne(java.lang.Integer)' language java;
Now with the JAR loaded and function created, I attempt to run it. This function should simply add 1 to the number provided.
select addone(3);
Results:
ERROR: java.lang.ClassNotFoundException: com.msmetric.Doug
Thoughts?
I'm very sorry I didn't see your question sooner. Underneath all the exotic details (PostgreSQL, PL/Java, schemas, classpaths...), there's just a bit of basic Java going on here: if a jar file contains a class Doug.class in package com.msmetric, its path within the jar has to reflect that: it has to be com/msmetric/Doug.class. Otherwise, it won't be found.
You can set up that whole structure step by step:
javac Doug.java
mkdir com
mkdir com/msmetric
mv Doug.class com/msmetric/
jar -cvf Doug.jar com/msmetric/Doug.class
Or, you can let javac do more of the work for you:
mkdir classes
javac -d classes Doug.java
jar -cvf Doug.jar -C classes .
When you give javac a -d directory option, instead of just writing class files next to their .java sources, it will put them all in their proper places under the directory you named, and then you can just tell jar to change into that directory and slurp them all up (don't overlook the . at the end of that jar command).
Once you fix that, if you retry your original steps, you'll see that you now get a different error:
ERROR: Unable to find static method com.msmetric.Doug.addOne with signature (Ljava/lang/Integer;)I
That happens because you declared the function in Doug.java with int addOne(int value) (that is, taking a primitive int argument), but you declared it in SQL with returns int as 'com.msmetric.Doug.addOne(java.lang.Integer)' taking an Integer object.
Once you correct that:
create or replace function addone(int) returns int as 'com.msmetric.Doug.addOne(int)' language java;
you'll be able to see:
# select addone(3);
addone
--------
4
(1 row)
If you happen to see this belated answer, may I ask what version of PL/Java you are using? That's one detail you didn't mention. If it is older than 1.5.0, there are newer features that can help you out. For one, you can just annotate that function:
@Function
public static int addOne(int value) {
    return value + 1;
}
and have javac spit out not only the Doug.class file but also a pljava.ddr file with your SQL function declaration already written correctly (no mixing up argument types!). There is a way to include that .ddr file in the jar you create, so that you can call sqlj.install_jar with the last parameter set to true; it then runs the commands in the .ddr and your functions are ready to use. There's a Hello, world example in the docs that shows more of how it's done.
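The earlier install step would then become (same jar path as before, assuming the .ddr file has been packed into the jar):
select sqlj.install_jar('file:/tmp/Doug.jar','Doug',true);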
Cheers,
-Chap

fails to import module when using matlabdomain

While trying to use the Sphinx MATLAB domain, I can't get the MWE provided on the extension's PyPI site to work.
There is always this "Can't import module" error. I'd guess that the extension generates pseudo-modules from the m-code, but up to now I could not figure out how this mechanism works.
The dir structure looks like this
root
|--test_data
| |--MyHandleClass.m
|
|--doc
|--------conf.py
|--------Makefile
|--------index.rst
The files MyHandleClass.m and index.rst contain the example code given on the package site, and conf.py starts like this:
import sys, os
sys.path.append(os.path.abspath('.'))
sys.path.append(os.path.abspath('./test_data'))
# -- General configuration -----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
"sphinxcontrib.matlab",
"sphinx.ext.autosummary",
"sphinx.ext.autodoc"]
autodoc_default_flags = ['members','show-inheritance','undoc-members']
autoclass_content = 'both'
mathjax_path = 'http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=default'
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
Error msg
WARNING: autodoc: failed to import module u'test_data'; the following exception was raised:
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\sphinx\ext\autodoc.py", line 335, in import_object
__import__(self.modname)
ImportError: No module named test_data
E:\ME\doc\index.rst:13: WARNING: don't know which module to import for autodocumenting u'MyHandleClass' (try placing a "module" or "currentmodule" directive in the document, or giving an explicit module name)
After varying this and that without success: maybe somebody out there has a clue?
Thanks for trying the matlabdomain sphinxcontrib extension. In order to use Sphinx to document MATLAB m-files, you need to set matlab_src_dir in conf.py, as described in the Configuration section of the documentation. This is because the Python interpreter can't import a MATLAB m-file. Therefore you should not add your MATLAB root to the Python sys.path, or you will get the error you received. Instead, set matlab_src_dir to the path containing the folder of the MATLAB project which you want to document.
Given your file structure, in order to document test_data use a conf.py with the following:
import os
# NOTE: don't add MATLAB m-files to `sys.path`
#sys.path.insert(0, os.path.abspath('.'))
# instead, point `matlab_src_dir` at the folder containing them
matlab_src_dir = os.path.abspath('..')  # MATLAB
Hope that does it! Please feel free to ask any more questions. I'm happy to help!

How to access static resources in jar (that correspond to src/main/resources folder)?

I have a Spark Streaming application built with Maven (as jar) and deployed with the spark-submit script. The application project layout follows the standard directory layout:
myApp
  src
    main
      scala
        com.mycompany.package
          MyApp.scala
          DoSomething.scala
          ...
      resources
        aPerlScript.pl
        ...
    test
      scala
        com.mycompany.package
          MyAppTest.scala
          ...
  target
    ...
  pom.xml
In the DoSomething.scala object I have a method (let's call it doSomething()) that tries to execute a Perl script -- aPerlScript.pl (from the resources folder) -- using scala.sys.process.Process and passing two arguments to the script (the first one is the absolute path to a binary file used as input, the second one is the path/name of the produced output file). I call then DoSomething.doSomething().
The issue is that I was not able to access the script: not with absolute paths, relative paths, getClass.getClassLoader.getResource, or getClass.getResource, even though I have specified the resources folder in my pom.xml. None of my attempts succeeded; I don't know how to find the stuff I put in src/main/resources.
I will appreciate any help.
SIDE NOTES:
I use an external Process instead of a Spark pipe because, at this step of my workflow, I must handle binary files as input and output.
I'm using Spark-streaming 1.1.0, Scala 2.10.4 and Java 7. I build the jar with "Maven install" from within Eclipse (Kepler)
When I use the "standard" getClass.getClassLoader.getResource method to access resources, I find that the effective classpath is the spark-submit script's one.
There are a few solutions. The simplest is to use Scala's process infrastructure:
import scala.sys.process._
object RunScript {
val arg = "some argument"
val stream = RunScript.getClass.getClassLoader.getResourceAsStream("aPerlScript.pl")
val ret: Int = (s"/usr/bin/perl - $arg" #< stream).!
}
In this case, ret is the return code for the process and any output from the process is directed to stdout.
A second (longer) solution is to copy the file aPerlScript.pl from the jar file to some temporary location and execute it from there. This code snippet should have most of what you need.
import java.io.{File, FileOutputStream}
import java.nio.channels.Channels

object RunScript {
    // Set up copy destination from the Java temporary directory. This is /tmp on Linux.
    val destDir = System.getProperty("java.io.tmpdir") + "/"
    // Get a channel to the script in the resources dir and open the destination file
    val source = Channels.newChannel(RunScript.getClass.getClassLoader.getResourceAsStream("aPerlScript.pl"))
    val fileOut = new File(destDir, "aPerlScript.pl")
    val dest = new FileOutputStream(fileOut)
    // Copy the script to the temporary directory
    dest.getChannel.transferFrom(source, 0, Long.MaxValue)
    source.close()
    dest.close()
    // Schedule the file for deletion for when the JVM quits
    sys.addShutdownHook {
        new File(destDir, "aPerlScript.pl").delete
    }
}
// Now you can execute the script from its temporary location, e.g. with scala.sys.process as in the first solution.
This approach allows you to bundle native libraries in JAR files. Copying them out allows the libraries to be loaded at runtime for whatever JNI mischief you have planned.