Failing to compile MonoGame on Boo compiler - monogame

SharpDevelop compiles fine, but trying to compile through booc doesn't work.
Boo Compiler version 0.9.4.9 (CLR 2.0.50727.8000)
Program.boo(4,8): BCE0021: Namespace 'Microsoft.Xna.Framework' not found, maybe
you forgot to add an assembly reference?
The command used in the Windows cmd tool: booc -resource:"C:\test\" Program.boo
Thank you. Alisa.

SharpDevelop most likely already passes the library references to the compiler for you. That means that when you manually invoke the booc command-line compiler, you have to tell the compiler where exactly the MonoGame libraries are. I haven't been able to check this myself yet, but I did have a quick look at the command-line options and the Boo source code, and I think you have to do the following:
-lib:C:/Path/To/MonoGame/Libraries
This will tell the compiler where to look for additional libraries.
The next thing you should do is add the libraries you want, e.g.:
-r:Microsoft.Xna.Framework.dll,Microsoft.Xna.Framework.Game.dll
Add these two additional compiler options to your command line and I think it should work.
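For example, the full invocation might look something like this (the library path below is only a placeholder for wherever the MonoGame assemblies actually live on your machine):
booc -lib:"C:\Path\To\MonoGame\Assemblies" -r:Microsoft.Xna.Framework.dll,Microsoft.Xna.Framework.Game.dll Program.boo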
I haven't actually compiled this way myself, because I found it rather tedious. Instead, I decided to create my own build script in Boo itself to compile my Boo programs. Even then I still had to add the library path and reference the libraries, as in the following snippet:
# Imports needed for this snippet to compile on its own
# (System.Linq.Enumerable provides the .Where extension method).
import System.IO
import System.Linq.Enumerable
import Boo.Lang.Compiler
import Boo.Lang.Compiler.IO
import Boo.Lang.Compiler.Pipelines

def CompileEngine() as bool:
    """Compiles the engine and puts it in ./lib/SpectralEngine.dll"""
    compiler = BooCompiler()
    compiler.Parameters.Pipeline = CompileToFile()
    compiler.Parameters.OutputType = CompilerOutputType.Library
    compiler.Parameters.Ducky = true
    compiler.Parameters.LibPaths.Add("./lib")
    compiler.Parameters.OutputAssembly = "./lib/SpectralEngine.dll"

    # Add libraries.
    compiler.Parameters.References.Add(compiler.Parameters.LoadAssembly("OpenTK.dll"))
    compiler.Parameters.References.Add(compiler.Parameters.LoadAssembly("NAudio.dll"))
    compiler.Parameters.References.Add(compiler.Parameters.LoadAssembly("Boo.Lang.dll"))
    compiler.Parameters.References.Add(compiler.Parameters.LoadAssembly("Boo.Lang.Parser.dll"))
    compiler.Parameters.References.Add(compiler.Parameters.LoadAssembly("Boo.Lang.Compiler.dll"))

    # Take all boo files from the Engine source directory.
    files = (Directory.GetFiles(Directory.GetCurrentDirectory() + """/src/Engine""", "*.boo", SearchOption.AllDirectories)
             .Where({file as string | return not file.Contains("Gwen")}))  # Filter out old GWEN files.

    for file in files:
        print "Adding file: " + file
        compiler.Parameters.Input.Add(FileInput(file))

    print "Compiling to ./lib/SpectralEngine.dll"
    context = compiler.Run()

    if context.GeneratedAssembly is null:
        print "Failed to compile:\n" + join(e for e in context.Errors, "\n")
        return false
    else:
        return true
I've put two of these build scripts here and here. It's perhaps a bit overkill for your case, though.

Related

How can I get "HelloWorld - BitBake Style" working on a newer version of Yocto?

In the book "Embedded Linux Systems with the Yocto Project", Chapter 4 contains a sample called "HelloWorld - BitBake style". I encountered a bunch of problems trying to get the old example working against the "Sumo" release 2.5.
If you're like me, the first error you encountered following the book's instructions was that you copied across bitbake.conf and got:
ERROR: ParseError at /tmp/bbhello/conf/bitbake.conf:749: Could not include required file conf/abi_version.conf
And after copying over abi_version.conf as well, you kept finding more and more cross-connected files that needed to be moved, and then some relative-path errors after that... Is there a better way?
Here's a series of steps which can allow you to bitbake nano based on the book's instructions.
Unless otherwise specified, these samples and instructions are all based on the online copy of the book's code-samples. While convenient for copy-pasting, the online resource is not totally consistent with the printed copy, and contains at least one extra bug.
Initial workspace setup
This guide assumes that you're working with Yocto release 2.5 ("sumo"), installed into /tmp/poky, and that the build environment will go into /tmp/bbhello. If you don't have the Poky tools and libraries already, the easiest way to get them is to clone the repository with:
$ git clone -b sumo git://git.yoctoproject.org/poky.git /tmp/poky
Then you can initialize the workspace with:
$ source /tmp/poky/oe-init-build-env /tmp/bbhello/
If you start a new terminal window, you'll need to repeat the previous command to get your shell environment set up again, but it should not replace any of the files created inside the workspace the first time.
Wiring up the defaults
The oe-init-build-env script should have just created these files for you:
bbhello/conf/local.conf
bbhello/conf/templateconf.cfg
bbhello/conf/bblayers.conf
Keep these; they supersede some of the book's instructions, meaning that you should not create or keep the files:
bbhello/classes/base.bbclass
bbhello/conf/bitbake.conf
Similarly, do not overwrite bbhello/conf/bblayers.conf with the book's sample. Instead, edit it to add a single line pointing to your own meta-hello folder, e.g.:
BBLAYERS ?= " \
${TOPDIR}/meta-hello \
/tmp/poky/meta \
/tmp/poky/meta-poky \
/tmp/poky/meta-yocto-bsp \
"
Creating the layer and recipe
Go ahead and create the following files from the book-samples:
meta-hello/conf/layer.conf
meta-hello/recipes-editor/nano/nano.bb
We'll edit these files gradually as we hit errors.
Can't find recipe error
The error:
ERROR: BBFILE_PATTERN_hello not defined
This is caused by the book website's bbhello/meta-hello/conf/layer.conf being internally inconsistent: it uses the collection name "hello" but on the next two lines uses _test suffixes. Just change them to _hello to match:
# Set layer search pattern and priority
BBFILE_COLLECTIONS += "hello"
BBFILE_PATTERN_hello := "^${LAYERDIR}/"
BBFILE_PRIORITY_hello = "5"
Interestingly, this error is not present in the printed copy of the book.
No license error
The error:
ERROR: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb: This recipe does not have the LICENSE field set (nano)
ERROR: Failed to parse recipe: /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
This can be fixed by adding a license setting with one of the values that bitbake recognizes. In this case, add a line to nano.bb:
LICENSE = "GPLv3"
Recipe parse error
ERROR: ExpansionError during parsing /tmp/bbhello/meta-hello/recipes-editor/nano/nano.bb
[...]
bb.data_smart.ExpansionError: Failure expanding variable PV_MAJOR, expression was ${@bb.data.getVar('PV',d,1).split('.')[0]} which triggered exception AttributeError: module 'bb.data' has no attribute 'getVar'
This is fixed by updating the inline Python snippets used in the recipe, because the bb.data module was deprecated and has now been removed. Instead, call getVar on d directly, e.g.:
PV_MAJOR = "${@d.getVar('PV',d,1).split('.')[0]}"
PV_MINOR = "${@d.getVar('PV',d,1).split('.')[1]}"
License checksum failure
ERROR: nano-2.2.6-r0 do_populate_lic: QA Issue: nano: Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]
This can be fixed by adding a directive to the recipe telling it what license-info-containing file to grab, and what checksum we expect it to have.
We can follow the way the recipe generates the SRC_URI, and modify it slightly to point at the COPYING file in the same web-directory. Add this line to nano.bb:
LIC_FILES_CHKSUM = "${SITE}/v${PV_MAJOR}.${PV_MINOR}/COPYING;md5=f27defe1e96c2e1ecd4e0c9be8967949"
The MD5 checksum in this case came from manually downloading and inspecting the matching file.
Done!
Now bitbake nano ought to work, and when it completes you should see that it has built nano:
/tmp/bbhello $ find ./tmp/deploy/ -name "*nano*.rpm*"
./tmp/deploy/rpm/i586/nano-dbg-2.2.6-r0.i586.rpm
./tmp/deploy/rpm/i586/nano-dev-2.2.6-r0.i586.rpm
I recently worked through that hands-on hello world project. In my opinion, the source code in the book contains some bugs. Below is a list of suggested fixes:
Inheriting native class
When you build with the bitbake you got from Poky, it builds only for the target unless you state in your recipe that you are building for the host machine (native). You can do the latter by adding this line at the end of your recipe:
inherit native
Adding license information
It is worth mentioning that the LICENSE variable must be set in any recipe, otherwise bitbake raises an error. In our case, we are trying to build version 2.2.6 of the nano editor; its current license is GPLv3, hence it should be declared as follows:
LICENSE = "GPLv3"
Using os.system calls
As the book states, you cannot dereference metadata directly from a Python function, which means it is mandatory to access metadata through the d dictionary. Below is a suggestion for the do_unpack Python function; you can reuse the same concept for the next tasks (do_configure, do_compile), as sketched after this example:
python do_unpack() {
    workdir = d.getVar("WORKDIR", True)
    dl_dir = d.getVar("DL_DIR", True)
    p = d.getVar("P", True)
    tarball_name = os.path.join(dl_dir, p + ".tar.gz")
    bb.plain("Unpacking tarball")
    os.system("tar -x -C " + workdir + " -f " + tarball_name)
    bb.plain("tarball unpacked successfully")
}
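For example, a do_compile task following the same pattern might look like the sketch below (the in-tree ./configure && make invocation is only an assumption about how the book builds nano; adjust it to match your recipe):
python do_compile() {
    workdir = d.getVar("WORKDIR", True)
    p = d.getVar("P", True)
    # Assume do_unpack above extracted the tarball into ${WORKDIR}/${P}.
    src_dir = os.path.join(workdir, p)
    bb.plain("Configuring and compiling nano")
    os.system("cd " + src_dir + " && ./configure && make")
    bb.plain("nano compiled successfully")
}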
Launching the nano editor
After successfully building your nano editor package, you can find the nano executable in the following directory if you are using Ubuntu (arch x86_64):
./tmp/work/x86_64-linux/nano/2.2.6-r0/src/nano
Should you have any comments or questions, don't hesitate to ask!

Cudafy chapter 3 example has path issue how to fix?

Using Cudafy version 1.29, which can be downloaded from here
I am executing the examples that are found in the install folder CudafyV1.29\CudafyByExample\
Specifically, "chapter 3" example that begins line 42 of program.cs calls the following:
simple_kernel.Execute();
which is this:
public static void Execute()
{
    CudafyModule km = CudafyTranslator.Cudafy(); // <-- exception thrown!
    GPGPU gpu = CudafyHost.GetDevice(CudafyModes.Target, CudafyModes.DeviceId);
    gpu.LoadModule(km);
    gpu.Launch().thekernel(); // or gpu.Launch(1, 1, "kernel");
    Console.WriteLine("Hello, World!");
}
The indicated line throws this exception:
Compilation error: CUDAFYSOURCETEMP.cu
'C:\Program' is not recognized as an internal or external command,
operable program or batch file. .
It is immediately obvious that the path has spaces and that the programmer did not double-quote it or use ~ to make it work.
So, I did not write this code, and I cannot step through the sealed code contained within CudafyModule km = CudafyTranslator.Cudafy();. In fact, I don't even know the full path that is causing the exception; it is cut off in the exception message.
Does anyone have a suggestion for how to fix this issue?
Update #1: I discovered where CUDAFYSOURCETEMP.cu lives on my computer, here it is:
C:\Users\humphrt\Desktop\Active Projects\Visual Studio
Projects\CudafyV1.29\CudafyByExample\bin\Debug
...I'm still trying to determine what the program is looking for along the path to 'C:\Program~'.
I was able to apply a workaround to bypass this issue. The workaround is to reinstall all components of Cudafy into folders whose paths contain no spaces. Note that I also installed the CUDA Toolkit from NVIDIA in the same location, also with no spaces in the folder names. I created a folder named "C:\CUDA" and installed all components within it.

Why can't I debug my puppet code?

I'm learning Puppet, and the biggest frustration I have with the entire paradigm is the try/run/fix development process I'm using to build functional Puppet code. My background is in Java, and I'm naturally used to debugging my code to find errors instead of just running the program to see where it bombs, which makes development much faster, but I can't seem to find a way to do this using Puppet and Eclipse. I know writing a debugger for Puppet would require some creativity given its nature, but I think this is something the community could really benefit from.
I've written debuggers and know the Eclipse SDK, but unfortunately it does not map cleanly to the Puppet architecture, which is a bit awkward in the sense that its runtime stack and execution flow do not happen in natural order, and the runtime requires a target machine to apply changes on.
I'm curious whether the community has done any development work on trying to create some kind of debugger where code can be stepped through. To write this, I think it would make sense to extend Eclipse with a new Puppet debug configuration type where you specify a target sandbox host to test your code as well as a Puppet project in your workspace you want to debug (leveraging the existing Geppetto tooling). Then, when you start a new Puppet debugging session, Eclipse could connect to the remote host, execute puppet apply with some additional debug arguments, and somehow provide feedback from the runtime about what line of code is currently being executed.
This might still be awkward, but it would allow Puppet developers to quickly see things like: oh duh, I can't create this directory because the parent path does not exist; wait, why is this if statement not going where I planned; oh, I see that Puppet is not very clear about single versus double quotes; or now I see why this fails, because this class was not executed first, etc.
Instead, all we get is a big ugly output on the agent console that, yes, can give us insight into errors, but does not cleanly map exceptions to our code. In my view that shows an underlying pain point and weakness of Puppet: can you at least give me a stack trace and a line number so I know where to look? Nope, sorry.
Don't get me wrong, I love how Puppet can make me look very productive throughout the work week when all I'm doing is running puppet apply on new machines (which my manager has not yet figured out), but I think that for Puppet to really be useful, this lack of debugging support needs to be addressed.
Does anyone else feel this pain? - Duncan Krebs
It would be impossible to "step through" Puppet code unless you want to debug against the Ruby codebase itself. It's not just that the order of "execution" is unclear; it's that the manifests themselves are never executed at a single time. They are actually evaluated in multiple phases throughout execution.
There are ways to simplify finding problems though. The biggest one is writing unit tests using rspec-puppet. It lets you essentially test the compilation phase of puppet, helping you catch errors like circular dependencies, incorrect conditional logic, etc.
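For example, a minimal rspec-puppet spec might look like the sketch below (the class name myapp and the file resource are made-up placeholders, and it assumes the usual rspec-puppet fixtures and spec_helper setup):
# spec/classes/myapp_spec.rb
require 'spec_helper'

describe 'myapp' do
  # Catches compilation problems such as circular dependencies early.
  it { is_expected.to compile.with_all_deps }

  # Checks that the compiled catalog contains the resources you expect.
  it { is_expected.to contain_file('/etc/myapp.conf').with_ensure('file') }
end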
There is a new tool called puppet-debugger which allows you to set breakpoints in your Puppet code in order to step through it, so this is no longer "impossible"; it has been available for about 8 months.
You will first need to install the puppet-debugger gem
https://github.com/nwops/puppet-debugger
Then install the debug module, include it in your fixtures or just ensure it is in your module path.
https://forge.puppet.com/nwops/debug
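For reference, assuming RubyGems and the puppet CLI are available, installing both typically looks something like:
gem install puppet-debugger
puppet module install nwops-debug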
Then just set a breakpoint in your code using the debug::break() function.
Ruby Version: 2.0.0
Puppet Version: 4.9.4
Puppet Debugger Version: 0.6.0
Created by: NWOps
Type "exit", "functions", "vars", "krt", "whereami", "facts", "resources", "classes",
"play", "classification", "types", "datatypes", "reset", or "help" for more information.
>> $vars = ['one', 'two', 'three']
=> [
[0] "one",
[1] "two",
[2] "three"
]
>> $vars.each | String $var | {
debug::break()
notify{$var:}
}
From file: puppet_debugger_input20170417-97123-qjwbaj.pp
1: $vars.each | String $var | {
=> 2: debug::break()
3: notify{$var:}
4: }
1:>> $var
=> "one"
2:>> exit
From file: puppet_debugger_input20170417-97123-qjwbaj.pp
1: $vars.each | String $var | {
=> 2: debug::break()
3: notify{$var:}
4: }
1:>> $var
=> "two"
2:>> exit
From file: puppet_debugger_input20170417-97123-qjwbaj.pp
1: $vars.each | String $var | {
=> 2: debug::break()
3: notify{$var:}
4: }
1:>> $var
=> "three"

How to do File creation and manipulation in functional style?

I need to write a program where I run a set of instructions and create a file in a directory. Once the file is created, when the same code block is run again, it should not run the same set of instructions, since they have already been executed before; the file is used as a guard.
var Directory: String = "Dir1"
var dir: File = new File("Directory");
dir.mkdir();
var FileName: String = Directory + File.separator + "samplefile" + ".log"
val FileObj: File = new File(FileName)
if(!FileObj.exists())
// blahblah
else
{
// set of instructions to create the file
}
When the program runs initially, the file won't be present, so it should run the set of instructions in the else branch and also create the file; on the second run it should exit, since the file now exists.
The problem is that I do not understand new File: when is the file actually created? Should I use file.createNewFile? Also, how can I write this in a functional style using case?
It's important to understand that a java.io.File is not a physical file on the file system, but a representation of a pathname -- per the javadoc: "An abstract representation of file and directory pathnames". So new File(...) has nothing to do with creating an actual file - you are just defining a pathname, which may or may not correspond to an existing file.
To create an empty file, you can use:
val file = new File("filepath/filename")
file.createNewFile();
If running on JRE 7 or higher, you can use the new java.nio.file API:
val path = Paths.get("filepath/filename")
Files.createFile(path)
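Putting the pieces together, a rough sketch of the guard-file pattern from your question could look like this (runInstructions is just a placeholder for your set of instructions):
import java.nio.file.{Files, Paths}

def runInstructions(): Unit = {
  // the set of instructions that should only ever run once
}

val dir  = Paths.get("Dir1")
val file = dir.resolve("samplefile.log")

if (Files.exists(file)) {
  // Guard file found: the instructions already ran, so do nothing.
} else {
  Files.createDirectories(dir) // make sure the directory exists
  runInstructions()
  Files.createFile(file)       // create the guard file last
}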
If you're not happy with the default IO APIs, you can consider a number of alternatives. Scala-specific ones that I know of are:
scala-io
rapture.io
Or you can use libraries from the Java world, such as Google Guava or Apache Commons IO.
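For instance, with Apache Commons IO (assuming the commons-io dependency is on your classpath), creating an empty file is a one-liner:
import java.io.File
import org.apache.commons.io.FileUtils

FileUtils.touch(new File("filepath/filename")) // creates the file if it does not exist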
Edit: One thing I did not consider initially: I understood "creating a file" as "creating an empty file"; but if you intend to write something immediately in the file, you generally don't need to create an empty file first.
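For instance (a minimal sketch; the path and content are placeholders), Files.write creates the file if it does not already exist:
import java.nio.file.{Files, Paths}
import java.nio.charset.StandardCharsets

Files.write(Paths.get("filepath/filename"), "some content".getBytes(StandardCharsets.UTF_8))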

Automake, generated source files and VPATH builds

I'm doing VPATH builds with automake. I'm now also using generated source, with SWIG. I've got rules in Makefile.am like:
dist_noinst_DATA = whatever.swig

whatever.cpp: whatever.swig
	swig -c++ -php $^
Then the file gets used later:
myprogram_SOURCES = ... whatever.cpp
It works fine when $builddir == $srcdir. But when doing VPATH builds (e.g. mkdir build; cd build; ../configure; make), I get error messages about missing whatever.cpp.
Should generated source files go to $builddir or $srcdir? (I reckon probably $builddir.)
How should dependencies and rules be specified to put generated files in the right place?
Simple answer
You should assume that $srcdir is read-only, so you must not write anything there.
So, your generated source-code will end up in $(builddir).
By default, autotool-generated Makefiles will only look for source-files in $srcdir, so you have to tell it to check $builddir as well. Adding the following to your Makefile.am should help:
VPATH = $(srcdir) $(builddir)
After that you might end up with a no rule to make target ... error, which you should be able to fix by updating your source-generating rule as in:
$(builddir)/whatever.cpp: whatever.swig
	# ...
A better solution
You might notice that in your current setup, the release tarball (as created by make dist) will contain the whatever.cpp file as part of your sources, since you added this file to the myprogram_SOURCES.
If you don't want this (e.g. because it might mean that the build-process will really take the pregenerated file rather than generating it again), you might want to use something like the following.
It uses a wrapper source-file (whatever_includer.cpp) that simply includes the generated file, and it uses -I$(builddir) to then find the generated file.
Makefile.am:
dist_noinst_DATA = whatever.swig

whatever.cpp: whatever.swig
	swig -c++ -php $^

whatever_includer.cpp: whatever.cpp

myprogram_SOURCES = ... whatever_includer.cpp
myprogram_CPPFLAGS = ... -I$(builddir)

clean-local::
	rm -f $(builddir)/whatever.cpp
whatever_includer.cpp:
#include "whatever.cpp"
Usually, you want to keep $srcdir readonly, so that if for instance the source is distributed unpacked on a CDROM, you can still run /.../configure from some other part of the file-system.
However, if you are using SWIG to generate source code for a wrapper library, you probably want to distribute that SWIG-generated code as well, so that your users do not need to install SWIG to compile your code. Then you indeed have a choice: you can decide that the SWIG-generated code should end up in $builddir (that is OK: make dist will collect it there and include it in the tarball), or you could decide to output SWIG-generated code in $srcdir, since it is really a source from the point of view of the distributed package. An advantage of keeping it in $srcdir is that when make distcheck attempts to build your package from a read-only source directory, it will fail on any attempt to call SWIG to regenerate the wrapper source. If you had your wrapper source in $builddir, you might not notice that you have some broken rule that causes SWIG to be run on the user's host; by generating in $srcdir you ensure that SWIG is not needed by your users.
So my preference is to output SWIG wrapper sources in $srcdir. My setup for Python wrappers looks as follows:
EXTRA_DIST = spot.i
python_PYTHON = $(srcdir)/spot.py # _PYTHON is distributed by default
pyexec_LTLIBRARIES = _spot.la
MAINTAINERCLEANFILES = $(srcdir)/spot_wrap.cxx $(srcdir)/spot.py
_spot_la_SOURCES = $(srcdir)/spot_wrap.cxx $(srcdir)/spot_wrap.h
_spot_la_LDFLAGS = -avoid-version -module
_spot_la_LIBADD = $(top_builddir)/src/libspot.la

$(srcdir)/spot_wrap.cxx: $(srcdir)/spot.i
	$(SWIG) -c++ -python -I$(srcdir) -I$(top_srcdir)/src $(srcdir)/spot.i
# Handle the multi-file output of SWIG.
$(srcdir)/spot.py: $(srcdir)/spot.i
	$(MAKE) $(AM_MAKEFLAGS) spot_wrap.cxx
Note that I use $(srcdir) for all targets, because of limitations of the VPATH feature on various flavors of make. My setup to deal with the multiple files output by SWIG could be improved, but as these rules are not run by users and it has never caused me any problem, I do not bother.