I am trying to set up a project with autoNumbered predicates. I couldn't use the lang:autoNumbered option in .logic files, as it gave me an error saying it expected a constraint or a lang:ordering pragma.
So I rewrote my code in a .lb file, which worked. The code is as follows:
create --unique
addblock <doc>
node(n), node_id(n:id) -> int(id).
lang:autoNumbered(`node_id).
cons_node[] = n -> node(n).
lang:constructor(`cons_node).
node_has_label[l] = n -> string(l), node(n).
node_attribute[n, k] = v -> node(n), string(k), string(v).
node_attribute_id(id, att, val) <- node_id(n: id), node_attribute[n, att] = val.
</doc>
exec <doc>
+node(n), +cons_node[] = n,
+node_attribute[n, "label"] = "Person",
+node_attribute[n, "name"] = "Alice".
</doc>
echo --- node_att_table:
print node_attribute_id
close --destroy
Now I want to move this into a node.logic and a separate data file. How do I do this while keeping the lang:autoNumbered and lang:constructor commands?
EDIT:
This is the code that I have tried to run:
block(`node) {
  export(`{
    node(n), node_id(n:id) -> int(id).
    lang:autoNumbered(`node_id).
    cons_node[] = n -> node(n).
    lang:constructor(`cons_node).
    node_attribute(n, k; v) -> node(n), string(k), string(v).
  })
} <-- .
And I get the error
error parsing block: expected a constraint or lang:ordering pragma (Error BLOCK_PARSE)
on the lang:autoNumbered and lang:constructor lines when I run lb config && make.
Extra info: I use Vagrant to run LogicBlox, and I am basing my examples on this blog post: https://developer.logicblox.com/2014/01/structuring-and-compiling-logicblox-applications/
I'm not sure what your original problem was, but this should actually work fine :). You should be able to put the logic in a .logic file and load it with the addblock --file option; the same applies to the exec logic. Using the inline <doc> tags versus separate files is essentially equivalent, so this should behave identically to including the logic inline as you did. If you want to load the data from a CSV file, then file predicates should work: https://developer.logicblox.com/content/docs4/core-reference/webhelp/predicates.html#file-predicates
Maybe you earlier tried it from the command-line and the back-tick caused some issues due to its special meaning in shell?
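For completeness, here's a minimal sketch of the split described above (the file names are made up, and I'm assuming exec accepts a --file option analogous to addblock --file; check lb help if the flag differs):
node.logic (the same logic as your addblock tag):
node(n), node_id(n:id) -> int(id).
lang:autoNumbered(`node_id).
cons_node[] = n -> node(n).
lang:constructor(`cons_node).
node_has_label[l] = n -> string(l), node(n).
node_attribute[n, k] = v -> node(n), string(k), string(v).
node_attribute_id(id, att, val) <- node_id(n: id), node_attribute[n, att] = val.
data.logic (the delta logic from your exec tag):
+node(n), +cons_node[] = n,
+node_attribute[n, "label"] = "Person",
+node_attribute[n, "name"] = "Alice".
load.lb (the script that wires them together):
create --unique
addblock --file node.logic
exec --file data.logic
print node_attribute_id
close --destroy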
I'm running into a problem with the following code when writing an Excel file name. The code is driven by user-defined inputs for the desired location and chemical compound. The desired output is a file whose name is the chemical compound with the location appended. The problem is that any compound with a '.' in it errors out. For example, if I want PM2.5 for site Kenny, the file name should be 'PM2.5 Kenny'. The code, however, recognizes ".5" as a file extension when it is meant to be part of the name. Any help getting around this would be appreciated.
The error this gives is:
Unrecognized file extension '.5 Kenny'. Use the 'FileType' parameter to specify the file type.
j = 1
i = 1
while j <= width(c_locations_of_interest)
    while i <= width(c_data_types_of_interest)
        Value = c_data_types_of_interest{1,i}
        Location = c_locations_of_interest{1,j}
        output_excel_file = append(Value,' ',Location)
        STATEMENTS
        writetable(T, output_excel_file)
        i = i + 1
    end
    i = 1
    j = j + 1
end
I was able to reproduce your problem with a crude example. MATLAB complains because the file name is missing its extension: writetable treats everything after the last '.' as the extension, so it reads '.5 Kenny' as one. Add the extension explicitly, in your case '.xlsx'.
Using sprintf instead of append:
value = "PM2.5";
location = " Kenny";
extension = "xlsx";
output_excel_file= sprintf("%s%s.%s", value, location, extension);
writetable(T, output_excel_file);
Hope it is clear
EDIT: Corrected my own variables naming ¬¬
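If you'd rather keep your original append call, tacking the extension onto the end should also work (append has been available since R2019a, and you are clearly already using it):
output_excel_file = append(Value, ' ', Location, '.xlsx');
writetable(T, output_excel_file)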
I'm trying to define an entity named isVector using the following syntax
Require Export Setoid.
Require Export Coq.Reals.Reals.
Require Export ArithRing.
Definition Point := Type.
Record MassPoint:Type:= cons{number : R ; point: Point}.
Variable add_MP : MassPoint -> MassPoint -> MassPoint .
Variable mult_MP : R -> MassPoint -> MassPoint .
Variable orthogonalProjection : Point -> Point -> Point -> Point.
Definition isVector (v:MassPoint):= exists A, B :Point , v= add_MP((−1)A)(1B).
And CoqIDE keeps complaining that there's a syntax error in the definition; so far I haven't been able to figure out what it is.
There are many problems here.
First, you'd write:
exists A B : Point, …
with no comma between the different variables.
But then, you also have syntax errors on the right-hand side. For one, I'm not sure those 1 and -1 scaling operations exist. Also, function calls would be written this way:
add_MP A B
You can write them the way you do:
add_MP(A)(B)
But in the long run you should probably get used to function application being just whitespace! You might also need to axiomatize this -1 operation the way you axiomatized the other operations, unless it is a notation that you defined somewhere but did not post here.
Thanks for the help. After experimenting a little bit, below is the solution that works.
Definition Point:= Type.
Record massPoint: Type := cons{number: R; point: Point}.
Variable add_MP: massPoint -> massPoint -> massPoint.
Variable mult_MP: R -> massPoint -> massPoint.
Definition tp (p:Point) := cons (-1) p.
Definition isVector(v:massPoint):= exists A B : Point, v = add_MP(cons (-1) A)(cons 1 B).
I'm fixing a python script using h5py. It contains code like this:
hdf = h5py.File(hdf5_filename, 'a')
...
g = hdf.create_group('foo')
g.create_dataset('bar', ...whatever...)
Sometimes this runs on a file which already has a group named 'foo', in which case I see "ValueError: Unable to create group (Name already exists)"
One way to fix this is to replace the single create_group line with four lines, like this:
if 'foo' in hdf.keys():
    g = hdf['foo']
else:
    g = hdf.create_group('foo')
g.create_dataset(...etc...)
Is there a neater way to do this, maybe in only one line? Like how with files in the standard C library, 'a' mode will either append to an existing file, or create a file if it's not already there.
Same goes for datasets - I have
create_dataset('bar', ...)
but should check first:
if 'bar' in g.keys():
    d = g['bar']
else:
    d = g.create_dataset('bar')
My wish: to find h5py has methods named create_or_use_group() and create_or_use_dataset(). What actually exists?
Yes: require_group and require_dataset:
import h5py
import numpy as np

with h5py.File("/tmp/so_hdf5/test2.h5", 'w') as f:
    a = f.create_dataset('a', data=np.random.random((10, 10)))

with h5py.File("/tmp/so_hdf5/test2.h5", 'r+') as f:
    a = f.require_dataset('a', shape=(10, 10), dtype='float64')
    d = f.require_dataset('d', shape=(20, 20), dtype='float64')
    g = f.require_group('foo')
    print(a)
    print(d)
    print(g)
Note that you do need to know the shape and dtype of the dataset, otherwise require_dataset throws a TypeError. In that case, you could do something like:
try:
    a = f.require_dataset('a', shape=(10, 10), dtype='float64')
except TypeError:
    a = f['a']
If you don't already know the shape and dtype, I don't think there's much advantage to require_dataset over using try ... except ...
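Applied to the snippet from the question, the four-line workaround then collapses to one call per object (the shape and dtype below are placeholders; use whatever your real data requires):
g = hdf.require_group('foo')                                  # returns 'foo' if it exists, creates it otherwise
d = g.require_dataset('bar', shape=(100,), dtype='float64')  # likewise for the dataset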
As usual, I got an SPSS file that I've imported into R with the spss.get function from the Hmisc package. I'm bothered by the labelled class that Hmisc::spss.get adds to all variables in the data.frame, hence I want to remove it.
The labelled class gives me headaches when I try to run ggplot or even when I want to do some menial analysis! One solution would be to remove the labelled class from each variable in the data.frame. How can I do that? Is that possible at all? If not, what are my other options?
I really want to avoid re-editing the variables "from scratch" with as.data.frame(lapply(x, as.numeric)) and as.character where applicable... And I certainly don't want to run SPSS and remove the labels manually (I don't like SPSS, nor do I care to install it)!
Thanks!
Here's how I get rid of the labels altogether. Similar to Jyotirmoy's solution but works for a vector as well as a data.frame. (Partial credits to Frank Harrell)
clear.labels <- function(x) {
  if(is.list(x)) {
    for(i in 1 : length(x)) class(x[[i]]) <- setdiff(class(x[[i]]), 'labelled')
    for(i in 1 : length(x)) attr(x[[i]], "label") <- NULL
  }
  else {
    class(x) <- setdiff(class(x), "labelled")
    attr(x, "label") <- NULL
  }
  return(x)
}
Use as follows:
my.unlabelled.df <- clear.labels(my.labelled.df)
EDIT
Here's a bit of a cleaner version of the function, same results:
clear.labels <- function(x) {
  if(is.list(x)) {
    for(i in seq_along(x)) {
      class(x[[i]]) <- setdiff(class(x[[i]]), 'labelled')
      attr(x[[i]], "label") <- NULL
    }
  } else {
    class(x) <- setdiff(class(x), "labelled")
    attr(x, "label") <- NULL
  }
  return(x)
}
A belated note/warning regarding class membership in R objects: the correct way to identify "labelled" objects is not to test with an is.* function or equality (==) but rather with inherits. Tests that look at a specific position in the class vector will not pick up cases where the order of the classes is not the one assumed.
You can avoid creating "labelled" variables in spss.get with the argument use.value.labels=FALSE:
w <- spss.get('/tmp/my.sav', use.value.labels=FALSE, datevars=c('birthdate','deathdate'))
The code from Bhattacharya could fail if the class of the labelled vector were simply "labelled" rather than c("labelled", "factor"), in which case it should have been:
class(x[[i]]) <- NULL # no error from assignment of empty vector
The error you report can be reproduced with this code:
> b <- 4:6
> label(b) <- 'B Label'
> str(b)
Class 'labelled' atomic [1:3] 4 5 6
..- attr(*, "label")= chr "B Label"
> class(b) <- class(b)[-1]
Error in class(b) <- class(b)[-1] :
invalid replacement object to be a class string
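With that same vector, the inherits test mentioned above works no matter where "labelled" sits in the class vector:
inherits(b, "labelled")   # TRUE, regardless of the position of "labelled" in class(b)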
You can try out the read.spss function from the foreign package.
A rough-and-ready way to get rid of the labelled class created by spss.get:
for (i in 1:ncol(x)) {
  z <- class(x[[i]])
  if (z[[1]] == 'labelled') {
    class(x[[i]]) <- z[-1]
    attr(x[[i]], 'label') <- NULL
  }
}
But can you please give an example where labelled causes problems?
If I have a variable MAED in a data frame x created by spss.get, I have:
> class(x$MAED)
[1] "labelled" "factor"
> is.factor(x$MAED)
[1] TRUE
So well-written code that expects a factor (say) should not have any problems.
Suppose:
library(Hmisc)
w <- spss.get('...')
You could remove the labels of a variable called "var1" by using:
attributes(w$var1)$label <- NULL
If you also want to remove the class "labelled", you could do:
class(w$var1) <- NULL
or if the variable has more than one class:
class(w$var1) <- class(w$var1)[-which(class(w$var1)=="labelled")]
Hope this helps!
Well, I figured out that the unclass function can be used to remove classes (who would have thought, aye?!):
library(Hmisc)
# let's presuppose that variable x was obtained through the spss.get() function
# and that x is a factor
> class(x)
[1] "labelled" "factor"
> foo <- unclass(x)
> class(foo)
[1] "integer"
It's not the luckiest solution; just imagine back-converting a bunch of vectors... If anyone tops this, I'll mark it as the answer...
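If you do want to apply this to a whole data frame in one go, a sketch along these lines should work (assuming x is the data.frame returned by spss.get):
# unclass every column in place while keeping the data.frame structure
x[] <- lapply(x, unclass)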
I'd like to start writing some unit tests for my Mathematica programs and control everything from the command line with some Makefiles.
It seems like Mathematica can be run from the command line but I can't see any basic instructions on getting started with doing this on Mac OS X — has anyone done this before?
Update:
Creating a test file like this:
Print["hello"];
x := 1;
y = x+1;
z = y+1;
Print["y="<>ToString@y];
Print["z="<>ToString@z];
Quit[];
And running it with
/Applications/Mathematica.app/Contents/MacOS/MathKernel -noprompt < test.m
is the closest I can get to some sort of batch processing. The output looks ugly, though; newlines are added for every line of the script!
"hello"
"y=2"
"z=3"
Is this the closest I can get to a script that can still output information to the console? I'm only using Mathematica 6, but I hope that doesn't make a difference.
This, finally, gives output like I'd expect it to:
/Applications/Mathematica.app/Contents/MacOS/MathKernel -noprompt -run "<<test.m"
Makes sense, I suppose. Adding this to my .bash_profile allows easy execution (as in mma test.m):
mma () { /Applications/Mathematica.app/Contents/MacOS/MathKernel -noprompt -run "<<$1" ; }
See also dreeves's mash Perl script, which may offer advantages over this approach.
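For the Makefile-driven workflow mentioned in the question, a minimal rule could then be as simple as this sketch (the target and file names are just placeholders):
# Makefile -- recipe lines must start with a tab
test:
	/Applications/Mathematica.app/Contents/MacOS/MathKernel -noprompt -run "<<test.m"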
With some experimentation, I found that /Applications/Mathematica.app/Contents/MacOS/MathKernel can be launched from the command-line. It doesn't seem to accept the usual -h or --help command-line flags, though.
Thanks to Pillsy and Will Robertson for the MASH plug! Here's the relevant StackOverflow question: Call a Mathematica program from the command line, with command-line args, stdin, stdout, and stderr
If you don't use MASH, you may want to use the following utility functions that MASH defines.
For example, the standard Print will print strings with quotation marks -- not usually what you want in scripts.
ARGV = args = Drop[$CommandLine, 4]; (* Command line args. *)
pr = WriteString["stdout", ##]&; (* More *)
prn = pr[##, "\n"]&; (* convenient *)
perr = WriteString["stderr", ##]&; (* print *)
perrn = perr[##, "\n"]&; (* statements. *)
EOF = EndOfFile; (* I wish mathematica *)
eval = ToExpression; (* weren't so damn *)
re = RegularExpression; (* verbose! *)
read[] := InputString[""]; (* Grab a line from stdin. *)
doList[f_, test_] := (* Accumulate list of what f[] *)
  Most@NestWhileList[f[]&, f[], test]; (* returns while test is true. *)
readList[] := doList[read, #=!=EOF&]; (* Slurp list'o'lines from stdin *)
To use MASH, just grab that perl file, mash.pl, and then make your test.m like the following:
#!/usr/bin/env /path/to/mash.pl
prn["hello"];
x := 1;
y = x+1;
z = y+1;
prn["y=", y];
prn["z=", z];