I'm attempting to do some alias analysis and other memory inspection. I've written a trivial AliasAnalysis pass (one that says everything must alias) to verify that my pass is getting picked up and run by opt.
I run opt with: opt -load ~/Applications/llvm/lib/MustAA.so -must-aa -aa-eval -debug < trace0.ll -debug-pass=Structure
I see my pass being initialized, but never being called (I see only may alias results).
Any ideas as to what to do to debug this? Or what I'm missing? I've read through http://llvm.org/docs/AliasAnalysis.html and don't see anything that I'm missing.
Here's the full source code of my pass:
#define DEBUG_TYPE "must-aa"
#include "llvm/Pass.h"
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Support/Debug.h"

using namespace llvm;

namespace {
  struct EverythingMustAlias : public ImmutablePass, public AliasAnalysis {
    static char ID;

    EverythingMustAlias() : ImmutablePass(ID) {}

    virtual void *getAdjustedAnalysisPointer(AnalysisID ID) {
      errs() << "called getAdjustedAnalysisPointer with " << ID << "\n";
      if (ID == &AliasAnalysis::ID)
        return (AliasAnalysis*)this;
      return this;
    }

    virtual void initializePass() {
      DEBUG(dbgs() << "Initializing everything-must-alias\n");
      InitializeAliasAnalysis(this);
    }

    virtual void getAnalysisUsage(AnalysisUsage &AU) const {
      AliasAnalysis::getAnalysisUsage(AU);
      AU.setPreservesAll();
    }

    virtual AliasResult alias(const Location &LocA, const Location &LocB) {
      DEBUG(dbgs() << "Everything must alias!\n");
      return AliasAnalysis::MustAlias;
    }
  };
}

namespace llvm {
  void initializeEverythingMustAliasPass(PassRegistry &Registry);
}

char EverythingMustAlias::ID = 0;

static RegisterPass<EverythingMustAlias> A("must-aa", "Everything must alias");
INITIALIZE_AG_PASS(EverythingMustAlias, AliasAnalysis, "must-aa",
                   "Everything must alias", false, true, false)
Running opt as above produces:
Args: opt -load /home/moconnor/Applications/llvm/lib/MustAA.so -must-aa -aa-eval -debug -debug-pass=Structure
WARNING: You're attempting to print out a bitcode file.
This is inadvisable as it may cause display problems. If
you REALLY want to taste LLVM bitcode first-hand, you
can force output with the `-f' option.
Subtarget features: SSELevel 8, 3DNowLevel 0, 64bit 1
Initializing everything-must-alias
Pass Arguments: -targetlibinfo -datalayout -notti -basictti -x86tti -no-aa -must-aa -aa-eval -preverify -domtree -verify
Target Library Information
Data Layout
No target information
Target independent code generator's TTI
X86 Target Transform Info
No Alias Analysis (always returns 'may' alias)
Everything must alias
ModulePass Manager
FunctionPass Manager
Exhaustive Alias Analysis Precision Evaluator
Preliminary module verification
Dominator Tree Construction
Module Verifier
===== Alias Analysis Evaluator Report =====
163 Total Alias Queries Performed
0 no alias responses (0.0%)
163 may alias responses (100.0%)
0 partial alias responses (0.0%)
0 must alias responses (0.0%)
Alias Analysis Evaluator Pointer Alias Summary: 0%/100%/0%/0%
168 Total ModRef Queries Performed
0 no mod/ref responses (0.0%)
0 mod responses (0.0%)
0 ref responses (0.0%)
168 mod & ref responses (100.0%)
Alias Analysis Evaluator Mod/Ref Summary: 0%/0%/0%/100%
Note the 163 may alias responses, even though my pass returns MustAlias.
Edit: Following a suggestion on the mailing list, I added the following member function, since my pass uses multiple inheritance. It doesn't seem to get called or to change anything.
virtual void *getAdjustedAnalysisPointer(AnalysisID ID) {
  errs() << "called getAdjustedAnalysisPointer with " << ID << "\n";
  if (ID == &AliasAnalysis::ID)
    return (AliasAnalysis*)this;
  return this;
}
I changed:
static RegisterPass<EverythingMustAlias> A("must-aa", "Everything must alias");
INITIALIZE_AG_PASS(EverythingMustAlias, AliasAnalysis, "must-aa",
"Everything must alias", false, true, false)
to
static RegisterPass<EverythingMustAlias> X("must-aa", "Everything must alias", false, true);
static RegisterAnalysisGroup<AliasAnalysis> Y(X);
Apparently INITIALIZE_AG_PASS only defines the pass's initialization function, so it is only useful for a pass that is statically linked into an LLVM executable. RegisterAnalysisGroup, on the other hand, performs its registration when the module is dynamically loaded, so the pass is then registered with the AliasAnalysis analysis group.
I wanted to use eBPF's latest map, BPF_MAP_TYPE_RINGBUF, but I can't find much information online on how I can use it, so I am just doing some trial-and-error here. I defined and used it like this:
struct bpf_map_def SEC("maps") r_buf = {
    .type = BPF_MAP_TYPE_RINGBUF,
    .max_entries = 1 << 2,
};

SEC("lsm/task_alloc")
int BPF_PROG(task_alloc, struct task_struct *task, unsigned long clone_flags) {
    uint32_t pid = task->pid;
    bpf_ringbuf_output(&r_buf, &pid, sizeof(uint32_t), 0); // stores the pid value in the ring buffer
    return 0;
}
But I got the following error when running:
libbpf: map 'r_buf': failed to create: Invalid argument(-22)
libbpf: failed to load object 'bpf_example_kern'
libbpf: failed to load BPF skeleton 'bpf_example_kern': -22
It seems like libbpf does not recognize BPF_MAP_TYPE_RINGBUF? I cloned the latest libbpf from GitHub and ran make and make install. I am using the Linux 5.8.0 kernel.
UPDATE: The issue seems to be resolved if I change max_entries to something like 4096 * 64, but I don't know why this is the case.
You are right, the problem is the size of the BPF_MAP_TYPE_RINGBUF (the max_entries attribute in the libbpf map definition). It has to be a multiple of the memory page size (4096 bytes on most popular platforms), which explains why it all worked once you specified 64 * 4096.
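For example, keeping the legacy bpf_map_def style from your snippet, a minimal corrected definition would just round the size up to a page-size multiple (which, as far as I know, also has to be a power of two); this is only a sketch, any suitable size works:

// Ring buffer size must be a multiple of the page size (and a power of two).
// Assuming 4 KiB pages, 16 pages gives a 64 KiB buffer.
struct bpf_map_def SEC("maps") r_buf = {
    .type = BPF_MAP_TYPE_RINGBUF,
    .max_entries = 16 * 4096,
};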
BTW, if you'd like to see some examples of using it, I'd start with BPF selftests:
user-space part: https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/prog_tests/ringbuf.c
kernel (BPF) part: https://github.com/torvalds/linux/blob/master/tools/testing/selftests/bpf/progs/test_ringbuf.c
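For completeness, a minimal user-space consumer using libbpf's ring buffer API might look roughly like this. This is only a sketch, not tested code; the skeleton name bpf_example_kern and its generated functions are assumptions based on the object name in your error messages:

#include <stdint.h>
#include <stdio.h>
#include <bpf/libbpf.h>
#include "bpf_example_kern.skel.h"   /* generated with: bpftool gen skeleton */

/* Called by ring_buffer__poll() once per record pushed from the BPF side. */
static int handle_event(void *ctx, void *data, size_t size)
{
    uint32_t pid = *(uint32_t *)data;
    printf("task_alloc: pid %u\n", pid);
    return 0;
}

int main(void)
{
    struct bpf_example_kern *skel = bpf_example_kern__open_and_load();
    if (!skel || bpf_example_kern__attach(skel))
        return 1;

    /* Attach a ring buffer manager to the r_buf map and poll for events. */
    struct ring_buffer *rb = ring_buffer__new(bpf_map__fd(skel->maps.r_buf),
                                              handle_event, NULL, NULL);
    if (!rb)
        return 1;

    for (;;)
        ring_buffer__poll(rb, 100 /* timeout in ms */);

    return 0;
}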
Failing to use an existing rte_hash from a secondary process:
h = rte_hash_find_existing("some_hash");
if (h) {
    // this will work, in case we re-create
    //rte_hash_free(h);
}
else {
    h = rte_hash_create(&params);
}

// using the hash will crash the process with:
// Program received signal SIGSEGV, Segmentation fault.
ret = rte_hash_lookup_data(h, name, &data);
DPDK Version: dpdk-19.02
Build Mode Static: CONFIG_RTE_BUILD_SHARED_LIB=n
The primary and secondary processes are different binaries but are linked to the same DPDK library.
The key is added in the primary as follows:
struct cdev_key {
    uint64_t len;
};

struct cdev_key key = { 0 };

if (rte_hash_add_key_data(testptr, &key, (void *)&test) < 0) {
    fprintf(stderr, "add failed errno: %s\n", rte_strerror(rte_errno));
}
and used in the secondary as follows:
printf("Looking for data\n");
struct cdev_key key = { 0 };
int ret = rte_hash_lookup_data (h,&key,&data);
With DPDK version 19.02, I am able to run two separate binaries without issues.
[EDIT-1] Based on the update in the ticket, I am able to look up a hash entry added from the primary in the secondary process.
Primary log:
rte_hash_count 1 ret:val 0x0:0x0
Secondary log:
0x17fd61380 rte_hash_count 1
rte_hash_count 1 key:val 0:0
Note: if using rte_hash_lookup, please remember to disable Linux ASLR via echo 0 | tee /proc/sys/kernel/randomize_va_space.
Binary 1: modified example/skeleton to create hash test
CMD-1: ./build/basicfwd -l 5 -w 0000:08:00.1 --vdev=net_tap0 --socket-limit=2048,1 --file-prefix=test
Binary 2: modified helloworld to look up the hash "test", else assert
CMD-2: for i in {1..20000}; do du -kh /var/run/dpdk/; ./build/helloworld -l 6 --proc-type=secondary --log-level=3 --file-prefix=test; done
Changing or removing the file-prefix results in the assert logic being hit.
Note: DPDK 19.02 has an inherent bug where it does not clean up /var/run/dpdk/; hence it is recommended to use 19.11.2 LTS.
Code-1:
struct rte_hash_parameters test = {0};
test.name = "test";
test.entries = 32;
test.key_len = sizeof(uint64_t);
test.hash_func = rte_jhash;
test.hash_func_init_val = 0;
test.socket_id = 0;
struct rte_hash *testptr = rte_hash_create(&test);
if (testptr == NULL) {
    rte_panic("Failed to create test hash, errno = %d\n", rte_errno);
}
Code-2:
assert(rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test"));
printf("hello from core %u::%p\n", lcore_id, rte_hash_find_existing("test1"));
As mentioned in the DPDK Programmer's Guide, using the multi-process functionality comes with some restrictions. One of them is that a pointer to a function cannot be shared between processes, so the hash function stored in the table is not usable from the secondary process. The suggested workaround is to perform the hash calculation in the application code itself and have the processes access the hash table using the precomputed hash value instead of the key alone (a sketch follows the guide excerpt below).
From DPDK Guide:
To work around this issue, it is recommended that multi-process applications perform the hash calculations by directly calling the hashing function from the code and then using the rte_hash_add_with_hash()/rte_hash_lookup_with_hash() functions instead of the functions which do the hashing internally, such as rte_hash_add()/rte_hash_lookup().
Please refer to the guide for more information [36.3. Multi-process Limitations]
link: https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html
At the time of writing this answer, the guide is for DPDK 20.08.
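For illustration, here is a rough sketch of that workaround using the cdev_key and the "test" hash from the code above. It is untested, and the primary and secondary sides are combined into one fragment for brevity:

#include <stdio.h>
#include <rte_errno.h>
#include <rte_hash.h>
#include <rte_jhash.h>

struct cdev_key key = { 0 };

/* Compute the signature by calling the hash function directly in the
 * application, instead of relying on the function pointer stored in the table. */
hash_sig_t sig = rte_jhash(&key, sizeof(key), 0);

/* Primary process: add the entry using the precomputed signature. */
if (rte_hash_add_key_with_hash_data(testptr, &key, sig, (void *)&test) < 0)
    fprintf(stderr, "add failed: %s\n", rte_strerror(rte_errno));

/* Secondary process: look it up with the same precomputed signature. */
void *data = NULL;
if (rte_hash_lookup_with_hash_data(h, &key, sig, &data) >= 0)
    printf("found value %p\n", data);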
When I try to match a message in a receive statement I get a "bad node type 44" error message. This happens when the message's type is a typedef. The error message is rather cryptic and doesn't give much insight.
typedef t {
    int i
}

init {
    chan c = [1] of {t}
    t x;

    !(c ?? [eval(x)]) // <--- ERROR
}
Note: This may or may not be a bug in Spin: apparently, the grammar allows using a structure variable as an argument for eval(), but it does not look like this situation is handled correctly (within the extent of my limited understanding). I would encourage you to contact the maintainers of Promela/Spin and submit your model.
Nevertheless, there is a work-around for the issue you reported (see below).
Contrary to what is reported here:
NAME
eval - predefined unary function to turn an expression into a constant.
SYNTAX
eval( any_expr )
The actual Promela grammar for EVAL looks a bit different:
receive  : varref '?' recv_args               /* normal receive */
         | varref '?' '?' recv_args           /* random receive */
         | varref '?' '<' recv_args '>'       /* poll with side-effect */
         | varref '?' '?' '<' recv_args '>'   /* ditto */

recv_args: recv_arg [ ',' recv_arg ] * | recv_arg '(' recv_args ')'

recv_arg : varref | EVAL '(' varref ')' | [ '-' ] const

varref   : name [ '[' any_expr ']' ] [ '.' varref ]
Take-aways:
apparently, eval is allowed to take a structure as its argument (because name may be the identifier of a typedef structure [?])
eval can also take a structure field as its argument
when one aims to apply message filtering to an entire structure, one can expand the relevant fields of the structure itself
Example:
typedef Message {
    int _filter;
    int _value;
}

chan inout = [10] of { Message }

active proctype Producer()
{
    Message msg;
    byte cc = 0;

    for (cc: 1 .. 10) {
        int id;
        select(id: 0..1);
        msg._filter = id;
        msg._value = cc;
        atomic {
            printf("Sending: [%d|%d]\n", msg._filter, msg._value);
            inout!msg;
        }
    }
    printf("Sender Stops.\n");
}

active proctype Consumer()
{
    Message msg;
    msg._filter = 0;
    bool ignored;

    do
    :: atomic {
           inout??[eval(msg._filter)] ->
               inout??eval(msg._filter), msg._value;
           printf("Received: [%d|%d]\n", msg._filter, msg._value);
       }
    :: timeout -> break;
    od;
    printf("Consumer Stops.\n");
}
Simulation output:
~$ spin test.pml
Sending: [1|1]
Sending: [0|2]
Received: [0|2]
Sending: [0|3]
Received: [0|3]
Sending: [0|4]
Received: [0|4]
Sending: [0|5]
Received: [0|5]
Sending: [1|6]
Sending: [0|7]
Received: [0|7]
Sending: [0|8]
Received: [0|8]
Sending: [1|9]
Sending: [1|10]
Sender Stops.
timeout
Consumer Stops.
2 processes created
Generating a verifier does not result in a compilation error:
~$ spin -a test.pml
~$ gcc -o run pan.c
Note: when using both message filtering and message polling (like in your model sample), the fields of the structure that are subject to message filtering should be placed at the beginning of it.
Apparently it's a bug; link to the GitHub issue: https://github.com/nimble-code/Spin/issues/17
Update: The bug is now fixed.
Update 2: The bug was actually only partially fixed; there are still some edge cases where it behaves weirdly.
Update 3: As far as I can tell, the bug now looks fixed. The only downside is that there now seems to be a strict restriction on what you can put in the receive args: they have to match exactly the types declared in the channel. No more partial matches or unrolling of struct fields.
My guess is that this error is related to the restrictions on structured types. One restriction is that they can't be handled as a unit; to assign or compare them, one must do it one field at a time.
For example, if one writes x == y, where x and y are variables of a typedef type, the following error is shown: Error: incomplete structure ref 'x' saw 'operator: =='
Under the hood, when Spin tries to compare the channel's queue contents against the eval argument, something is triggered that indicates the comparison can't be done, and the "bad node type" message is raised.
I'd like to look at the results of retrieving a single document from MongoDB using the C++ 3.0 driver. The driver documentation describes the view() method of the bsoncxx::document::value class (which is returned by mongocxx::collection::find_one). When I attempt to use it like this:
#include <bsoncxx/document/view.hpp>
#include <bsoncxx/document/value.hpp>
#include <mongocxx/instance>
#include <mongocxx/client>
mongocxx::instance inst{};
mongocxx::client conn{};
bsoncxx::document::view doc;
auto db = conn["test"];
try {
auto docObj = db["collection"].find_one(document{} <<
"field" << "value" << finalize);
doc = docObj.view();
} catch (mongocxx::exception::query e) {
std::cerr << "Couldn't retrieve document";
return NULL;
}
...
I get the following compilation error:
error: 'struct core::v1::optional<bsoncxx::v0::document::value>' has no member named 'view'
at the line
doc = docObj.view();
What am I doing wrong? If this is not the correct idiom for using find_one(), what should I be using instead?
Found it. find_one() returns an optional wrapper (the core::v1::optional in the error message), so the members of the contained bsoncxx::document::value are reached with var->member. The above code should have read:
doc = docObj->view();
It was confusing because docObj is an object, not a pointer, but an object that presents its underlying object as though it were a pointer.
core::v1::optional<T> acts much like std::experimental::optional<T>.
And as described in the documentation for std::experimental::optional (or, since C++17, std::optional),
When an object of type optional is contextually converted to bool, the conversion returns true if the object contains a value and false if it does not contain a value.
you have to check that your docObj contains a value (by applying operator bool to it) because
The behavior [of operator*] is undefined if *this does not contain a value
(a bad_optional_access exception is described for value(), but the documentation for operator* says that trying to access the contained value when there is none leads to UB).
So, your code has to look like
if (docObj) {
    doc = docObj->view();
} else {
    // Throw an exception? Log an error to the console?
    // Do nothing?
    std::cerr << "find_one() failed for" << std::endl <<
        bsoncxx::to_json(
            document{} << "field" << "value" << finalize
        ) << std::endl;
}
This may help if find_one() fails for some reason.
Yes, the implementations of core::v1::optional<T> and std::optional may differ (at least I can't find core::v1::optional in the official API documentation).
But it's better to check.
UPD: As shown in the header for stdx::optional, I'm (partially?) right: it can use std::experimental::optional.
I'm using Eclipse 4.2 with CDT and the MinGW toolchain on a Windows machine (although I have a feeling the problem has nothing to do with this specific configuration). The g++ compiler is 4.7.
I'm playing with c++11 features, with the following code:
#include <iostream>
#include <iomanip>
#include <memory>
#include <vector>
#include <list>
#include <algorithm>
using namespace std;
int main( int argc, char* argv[] )
{
    vector<int> v { 1, 2, 3, 4, 5, 6, 7 };
    int x {5};
    auto mark = remove_if( v.begin(), v.end(), [x](int n) { return n<x; } );
    v.erase( mark, v.end() );
    for( int x : v ) { cout << x << ", "; }
    cout << endl;
}
Everything is very straightforward and idiomatic C++11. The code compiles with no problems on the command line (g++ -std=c++11 hello.cpp).
In order to make this code compile in Eclipse, I set the compiler to support C++11:
Properties -> C/C++ Build -> Settings -> Miscellaneous -> Other Flags:
I'm adding -std=c++11
Properties -> C/C++Build -> Discovery Options -> Compiler invocation arguments:
Adding -std=c++11
That's the only change I made to either the global preferences or the project properties.
First question: Why do I have to change the flags in two places? When is each compiler flag used?
If I hit Ctrl-B, the project builds successfully, as expected, and running it from within Eclipse shows the expected result (it prints: '5, 6, 7,').
However, the editor view shows red error marks on both the remove_if line and the v.erase line. Similarly, the Problems view shows that I have these two problems. Looking at the details of the problems, I get:
For the remove_if line: 'Invalid arguments. Candidates are: #0 remove_if(#0, #0, #1)'
For the erase line: 'Invalid arguments. Candidates are: ? erase(?), ? erase(?,?)'
Second question: It appears there are two different builds: one for continuous status and one for the actual build. Is that right? If so, do they have different rules (compilation flags, include paths, etc.)?
Third question: In the problem details I also see: 'Name resolution problem found by the indexer'. I guess this is why the error messages are so cryptic. Are those messages coming from the MinGW g++ compiler or from Eclipse? What is this name resolution? How do I fix it?
Appreciate your help.
EDIT (in reply to #Eugene): Thank you Eugene. I've opened a bug on Eclipse. I think C++11 is only partially to blame: I removed the C++11 constructs from my code and removed the -std=c++11 flag from both compilation switches, and yet Codan barks at the remove_if line:
int pred( int n ) { return n < 5; }

int main( int argc, char* argv[] )
{
    vector<int> v;
    for( int i=0; i<=7; ++i ) {
        v.push_back( i );
    }
    vector<int>::iterator mark = remove_if( v.begin(), v.end(), pred );
    v.erase( mark, v.end() );
    for( vector<int>::iterator i = v.begin(); i != v.end(); ++i ) {
        cout << *i << ", ";
    }
    cout << endl;
}
The code compiles just fine (with Ctrl-B), but Codan doesn't like the remove_if line, saying: Invalid Arguments, Candidates are '#0 remove_if(#0,#0,#1)'.
This is a very cryptic message; it appears that it fails to substitute the arguments into the format string (#0 for 'iterator' and #1 for 'predicate'). I'm going to update the bug.
Interestingly, using 'list' instead of 'vector' clears up the error.
However, as for my question, I'm curious about how Codan works. Does it use g++ (with a customized set of flags) or another external tool (lint?), or does it do the analysis internally in Java? If there is a tool, how can I get its command-line arguments and its output?
Build/Settings: these flags will be included in your makefile to do the actual build. Build/Discovery: these flags will be passed to the compiler when "scanner settings" are discovered by the IDE. The IDE runs the compiler in a special mode to discover the values of predefined macros, include paths, etc.
I believe the problems you are seeing are detected by "Codan". Codan is a static analysis engine built into the CDT editor; you can find its settings under "C/C++ General"/"Code Analysis". You should report the problem to bugs.eclipse.org if you feel the errors shown are bogus. Note that CDT does not yet support all C++11 features.