According to "How is a view defined":
Views and their dependencies can be defined only in the default namespace.
q also has a command \b:
Syntax: \b [namespace]
Lists dependencies (views) in namespace. Defaults to current namespace.
Based on this, I assume it should be possible to create a view outside the default namespace:
$ q
KDB+ 3.6 2019.04.02 Copyright (C) 1993-2019 Kx Systems
m32/ ...
q)\d .jar
q.jar)v::x+1
q.jar)\d .
q)`. `v
x+1
but the view was created in the root (.) namespace.
So is it possible to somehow create a view in a non-default (non-current) namespace? If not, why does the command \b [namespace] take a namespace argument?
The answer to your question depends on what you call a namespace. The official q documentation on this topic is vague if not misleading. For example, a page describing the system command \d reads:
\d (directory)
Syntax: \d [namespace]
Sets the current namespace (also known as directory or context). The
namespace can be empty, and a new namespace is created when an object
is defined in it. The prompt indicates the current namespace.
As you can see, the optional argument is called a directory on the first line but becomes a namespace on the second, which, as we learn from the third line, is also "known as context."
However, the three words -- namespace, directory and context -- can be used interchangeably in some, but not all, cases. Defining a view is one such case where the distinction between directories and namespaces is important.
Due to the lack of clarity in the official terminology, let me refer you to a great book, "Q Tips: Fast, Scalable and Maintainable Kdb+" by Nick Psaris. Nick distinguishes the subset of namespaces whose names begin with a "." and calls them, and only them, directories. In his terminology all directories are namespaces, but not all namespaces are directories.
It turns out that directories have limitations; in particular, they can't contain views. But a less known fact is that namespaces that are not directories can:
q).my.dir.v::x+1 / a (failed) attempt to create a view v in a directory
'x
[0] .my.dir.v::x+1
q)my.ns.v1::x+1 / v1 is defined in a namespace
q)your.ns.v2::x-1 / so is v2
q)\b
`symbol$()
q)\b my.ns
,`v1
q)\b your.ns
,`v2
q)x:41
q)my.ns.v1
42
q)your.ns.v2
40
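As a quick check (not part of the original session), these views should recalculate whenever x changes, just like views defined in the root namespace:
q)x:100
q)my.ns.v1
101
q)your.ns.v2
99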
I'm really lost on how to actually use multiple files in Coq. I was trying to follow these directions.
I have two files.
src/a.v:
Definition bar: nat := 1.
src/b.v:
Require Import a.
Definition foo := bar.
I attempt to compile as such:
coqc -R src "" src/a.v src/b.v
I get the following error:
user#machine:~/code/coq$ coqc -R src "" src/a.v src/b.v
While loading initial state:
Loading file /home/user/code/coq/src/.b.aux: aux file name mismatch
I can't find any clear information on how you actually compile with multiple files.
I recommend you perform two calls to coqc, first with a, then with b. Passing multiple files on the command line is actually not supported [we will improve the interface in future versions so as to warn about this].
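For example, keeping the same -R mapping as in the question, the two invocations would look like this (an untested sketch):
coqc -R src "" src/a.v
coqc -R src "" src/b.v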
I have two similar boards and want to write a recipe for each of them, but they will have different kernel patches. What is the best way to do this? Or should I add new machines to the build?
I added my-machine to mylayer/local.conf:
MACHINEOVERRIDES = "imx8qmmek:my-machine"
I created mylayer/recipex-kernel/linux/linux-imx_%.bbappend with my patches:
SRC_URI_imx8qmmek += " file://0001-add-modified-dts.patch "
SRC_URI_imx8qmmek += " file://0002-EP4668-wifi-bt-modified-dts.patch "
SRC_URI_imx8qmmek += " file://0003-EP4822-enable-USB3-hub.patch "
SRC_URI_my-machine += " file://0004-EP4827-comment-usdhc3.tcu.patch "
SRC_URI_imx8qmmek += " file://EP4133_added_BRCM-PCIE.cfg"
do_configure_append_imx8qmmek() {
bbnote "adding BRCM-PCIE configuration ${PN}"
cat ../*.cfg >> ${B}/.config
}
Then I ran the command:
MACHINE="my-machine" bitbake -c clean linux-imx
But the terminal output this error:
WARNING: Layer meta-mylayer should set LAYERSERIES_COMPAT_mylayer in its conf/layer.conf file to list the core layer names it is compatible with.
WARNING: Layer meta-mylayer should set LAYERSERIES_COMPAT_meta-mylayer in its conf/layer.conf file to list the core layer names it is compatible with.
WARNING: You have included the meta-gnome layer, but 'x11' has not been enabled in your DISTRO_FEATURES. Some bbappend files may not take effect. See the meta-gnome README for details on enabling meta-gnome support.
WARNING: Host distribution "ubuntu-18.04" has not been validated with this version of the build system; you may possibly experience unexpected failures. It is recommended that you use a tested distribution.
ERROR: OE-core's config sanity checker detected a potential misconfiguration.
Either fix the cause of this error or at your own risk disable the checker (see sanity.conf).
Following is the list of potential problems / advisories:
MACHINE=my-machine is invalid. Please set a valid MACHINE in your local.conf, environment or other configuration file.
Similar != identical. If they are indeed slightly different, then two machines is the way to go. If they are sufficiently similar (to be determined by yourself :)), different distros are also an option. It all depends on how different the machines are and how different the final images should be (you might need two machines, two distros, or both).
If you have two similar machines but need two machine configuration files, put most of the common code into an .inc required by both machines. Don't forget to put a MACHINEOVERRIDES somewhere in that .inc file with a name that makes sense for both machines (e.g., if you have rpi3-lcd and rpi3-iot, have an rpi3-common.inc that adds rpi3-common to MACHINEOVERRIDES). This makes it possible to use VAR_rpi3-common in recipes that carry patches or other machine-specific changes, so they apply to both machines without needing both VAR_rpi3-lcd and VAR_rpi3-iot (see the sketch below).
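A minimal sketch of that layout, using the hypothetical rpi3 names from the example above:
# conf/machine/include/rpi3-common.inc (shared settings)
MACHINEOVERRIDES =. "rpi3-common:"
# ... kernel, bootloader and other settings common to both boards ...

# conf/machine/rpi3-lcd.conf
require conf/machine/include/rpi3-common.inc
# ... lcd-specific settings ...

# conf/machine/rpi3-iot.conf
require conf/machine/include/rpi3-common.inc
# ... iot-specific settings ...

# in a recipe or bbappend, this now applies to both machines:
SRC_URI_rpi3-common += " file://shared.patch"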
I want to ignore specific files in the subchart folders (because some objects, like secrets, are created by all my subcharts, so they are duplicated...). I don't know the depth of these objects. So I want to use this syntax in .helmignore:
charts/**/myfile.yaml
But I got this error:
Error: double-star (**) syntax is not supported
How can I do that in helm 3?
Unfortunately, this feature isn't supported in either helm2 or helm3.
helm2 source code: link
helm3 source code: link
Try ignoring the files explicitly:
$ cat .helmignore
secrets
# or
./secrets/my-secret.yaml
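If the maximum nesting depth is known, another possible workaround (assuming single-star patterns are matched per path segment and never cross a directory separator) is to list one pattern per level:
$ cat .helmignore
charts/*/myfile.yaml
charts/*/*/myfile.yaml
charts/*/*/*/myfile.yaml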
I'm trying to understand how the Postgres 9.1 rpms are built on CentOS/RHEL 6, so I'm taking a look at the spec file from the source rpms.
What does the following syntax do/mean? Specifically, the question mark and exclamation point?
%{!?test:%define test 1}
%{!?plpython:%define plpython 1}
%{!?pltcl:%define pltcl 1}
%{!?plperl:%define plperl 1}
%{!?ssl:%define ssl 1}
%{!?intdatetimes:%define intdatetimes 1}
%{!?kerberos:%define kerberos 1}
%{!?nls:%define nls 1}
%{!?xml:%define xml 1}
%{!?pam:%define pam 1}
%{!?disablepgfts:%define disablepgfts 0}
%{!?runselftest:%define runselftest 0}
%{!?uuid:%define uuid 1}
%{!?ldap:%define ldap 1}
I understand you can define a macro variable with %define <name>[(opts)] <value>, and I believe the exclamation mark is a logical negation operator. I can't find any info on the question mark or examples like the above though. Seems like some sort of test before defining the macro variable.
Here is a paste of the spec file.
Let's review a single item here:
%{!?plpython:%define plpython 1}
On line 102 we also see this:
%if %plpython
BuildRequires: python-devel
%endif
As you said, we know this is a macro, which can also be confirmed via the Fedora docs. If we expand our search of the Fedora documentation we find conditional macros, which states the following:
You can use a special syntax to test for the existence of macros. For example:
%{?macro_to_test: expression}
This syntax tells RPM to expand the expression if macro_to_test exists, otherwise ignore. A leading exclamation point, !, tests for the non-existence of a macro:
%{!?macro_to_test: expression}
In this example, if the macro_to_test macro does not exist, then expand the expression.
The Fedora docs have provided the answer: if the plpython macro doesn't exist, then
%define plpython 1
If you look at line 38 you can also see this:
# In this file you can find the default build package list macros. These can be overridden by defining
# on the rpm command line:
# rpm --define 'packagename 1' .... to force the package to build.
# rpm --define 'packagename 0' .... to force the package NOT to build.
# The base package, the lib package, the devel package, and the server package always get built.
So if you don't define the macro when you build the package (which I imagine is what most users would do), this ensures that the BuildRequires are properly configured for what appears to be a standard PostgreSQL installation.
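For example (a sketch; the spec file name is an assumption), the conditional guard and the command-line override interact like this:
# plpython is undefined, so %{!?plpython:%define plpython 1} defines it to 1
rpmbuild -ba postgresql.spec

# plpython is already defined, so the guard expands to nothing, the value 0
# survives, and the python-devel BuildRequires is skipped
rpmbuild --define 'plpython 0' -ba postgresql.spec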
How does one access command line flags (arguments) as environment variables in Erlang (as flags, not ARGV)? For example:
The RabbitMQ CLI looks something like:
erl \
...
-sasl errlog_type error \
-sasl sasl_error_logger '{file,"'${RABBITMQ_SASL_LOGS}'"}' \
... # more stuff here
If one looks at sasl.erl, you see these lines:
get_sasl_error_logger() ->
case application:get_env(sasl, sasl_error_logger) of
% ... etc
By some unknown magic the sasl_error_logger variable becomes an Erlang tuple! I've tried replicating this in my own Erlang application, but I only seem to be able to access these values via init:get_argument, which returns the value as a string.
How does one pass in values via the command line and access them easily as Erlang terms?
UPDATE: Also, for anyone looking, to use environment variables in the 'regular' way, use os:getenv("THE_VAR")
Make sure you set up an application configuration file:
{application, fred,
[{description, "Your application"},
{vsn, "1.0"},
{modules, []},
{registered,[]},
{applications, [kernel,stdlib]},
{env, [
{param, 'fred'}
]
...
and then you can set your command line up like this:
-fred param 'billy'
I think you need to have the parameter in your application configuration to do this - I've never done it any other way...
Some more info (easier than putting it in a comment)
Given this:
{emxconfig, {ets, [{keypos, 2}]}},
I can certainly do this:
{ok, {StorageType, Config}} = application:get_env(emxconfig),
but (and this may be important) my application is started at this time (it may actually only need to be loaded, not started, judging from the application_controller code).
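A minimal sketch pulling this together (the module name and parameter value are hypothetical), assuming the node is started with something like erl -fred param '{ets,[{keypos,2}]}':
-module(fred_config).
-export([get_param/0]).

%% Values given on the command line as -App Key Value are parsed as Erlang
%% terms and override the env section of the .app file.
get_param() ->
    case application:load(fred) of              % loading is enough; no need to start
        ok -> ok;
        {error, {already_loaded, fred}} -> ok
    end,
    case application:get_env(fred, param) of
        {ok, Value} -> Value;                   % e.g. {ets,[{keypos,2}]}
        undefined   -> {error, not_set}
    end.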