I have built Bro IDS from source. It installed successfully:
user@ubuntu:~$ bro -v
bro version 2.4.1
I am running Bro in a VM. My Ethernet interface is ens33 instead of eth0. After updating node.cfg with my custom interface, i.e. ens33, I am still unable to start Bro.
node.cfg
[bro]
type=standalone
host=localhost
interface=ens33
When I start broctl, I see the following error logs:
Bro 2.4.1
Linux 4.4.0-96-generic
==== No reporter.log
==== stderr.log
fatal error: problem with interface eth0 (eth0: SIOCETHTOOL(ETHTOOL_GET_TS_INFO) ioctl failed: No such device)
==== stdout.log
max memory size (kbytes, -m) unlimited
data seg size (kbytes, -d) unlimited
virtual memory (kbytes, -v) unlimited
core file size (blocks, -c) unlimited
==== .cmdline
-i eth0 -U .status -p broctl -p broctl-live -p standalone -p local -p bro local.bro broctl broctl/standalone broctl/auto
==== .env_vars
PATH=/usr/bin:/usr/share/broctl/scripts:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
BROPATH=/var/spool/bro/installed-scripts-do-not-touch/site::/var/spool/bro/installed-scripts-do-not-touch/auto:/usr/share/bro:/usr/share/bro/policy:/usr/share/bro/site
CLUSTER_NODE=
==== .status
TERMINATED [atexit]
==== No prof.log
==== No packet_filter.log
==== No loaded_scripts.log
What I understand from the logs is that broctl is not reading the updated node.cfg, because it is still using the wrong interface. What other changes do I need to make in order to start Bro without it crashing?
You should edit the config file at /etc/bro/node.cfg and change eth0 to ens33.
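Something along these lines should work; this is only a sketch, the exact node.cfg path depends on the prefix you used when building from source, and the commands assume a standard BroControl setup:

# point the standalone node at the right interface
sudo sed -i 's/^interface=.*/interface=ens33/' /etc/bro/node.cfg
# regenerate the runtime files under /var/spool/bro so the change takes effect
sudo broctl install
sudo broctl restart

The -i eth0 in your .cmdline shows Bro is still being started with the previously installed configuration, so running broctl install (or broctl deploy, if your BroControl version has it) after editing node.cfg is the important part.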
When I create an iSCSI target containing two LUNs (bdevs), the two LUNs are mapped to two disks. When I use fio to read and write the two disks, the iSCSI target uses a single thread (or core) to perform the operations.
The commands I ran:
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
./scripts/rpc.py --verbose DEBUG iscsi_create_portal_group 1 172.20.20.156:3261
./scripts/rpc.py --verbose DEBUG iscsi_create_initiator_group 2 ANY 172.20.20.156/24
./scripts/rpc.py --verbose DEBUG iscsi_create_target_node disk1 "Data Disk1" "Malloc0:0 Malloc1:1" 1:2 64 -d
iscsiadm -m discovery -t sendtargets -p 172.20.20.156:3261
iscsiadm -m node --targetname iqn.2016-06.io.spdk:disk1 --portal 172.20.20.156:3261 --login
fio -ioengine=libaio -bs=512B -direct=1 -thread -numjobs=2 -size=64M -rw=write -filename=/dev/sdd -name="BS 512B read test" -iodepth=2
fio -ioengine=libaio -bs=512B -direct=1 -thread -numjobs=2 -size=64M -rw=write -filename=/dev/sde -name="BS 512B read test" -iodepth=2
(screenshot of the iSCSI target's log output)
The log line circled in red above was added by me. When I read and write to the two disks at the same time, the thread does not change.
Can't the read and write operations of these two disks be performed on two different threads?
The read and write operations of these two disks can be performed on two different threads.
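Whether that actually happens depends on the setup: the SPDK iSCSI target handles each connection on one core, and since both LUNs sit behind a single target node they share one connection and therefore one thread. A sketch of one way to involve two threads, assuming the stock iscsi_tgt application (the binary path and the disk2 node name are illustrative, not from the question):

# run the target on two cores (binary path depends on your SPDK build)
./build/bin/iscsi_tgt -m 0x3 &

# one malloc bdev per target node, so each disk gets its own connection
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
./scripts/rpc.py iscsi_create_portal_group 1 172.20.20.156:3261
./scripts/rpc.py iscsi_create_initiator_group 2 ANY 172.20.20.156/24
./scripts/rpc.py iscsi_create_target_node disk1 "Data Disk1" "Malloc0:0" 1:2 64 -d
./scripts/rpc.py iscsi_create_target_node disk2 "Data Disk2" "Malloc1:0" 1:2 64 -d

# after the initiator logs in to both nodes, list the active connections
./scripts/rpc.py iscsi_get_connections

With two separate target nodes the initiator opens two connections, and the target can then place them on different poll groups/cores; with everything behind one node you will keep seeing a single thread no matter how many LUNs it carries.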
I'm going mad with a stupid issue I can't solve.
During the testing of my Yocto project I always used connmanctl to connect my board to the internet.
Now I am going to release the product, but before releasing I am working on an “internet connection manager”.
I guess I can’t use connmanctl anymore, since it is an interactive command (isn’t it?), so I’m going to use wpa_supplicant directly.
In my script I edit wpa_supplicant.conf as follows:
root@localhost:~# cat /etc/wpa_supplicant/wpa_supplicant.conf
ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=0
update_config=1
bgscan=""
network={
ssid="Obi_Lan_Kenobi"
psk="TheForceIsStrongWithThisOne"
}
After that I try to start wpa_supplicant with this command:
wpa_supplicant -B -i mlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf wext
As a result of this command I get:
Successfully initialized wpa_supplicant
But if I try to ping google.com (or any other website), I see that the network doesn’t work. In particular I get this message: ping: sendto: Network is unreachable
Everything is working under connmanctl, but not under wpa_supplicant.
The strange thing is that, when running the iw command, everything seems to be configured correctly:
root@localhost:~# iw dev mlan0 link
Connected to 56:0c:ff:37:1a:69 (on mlan0)
SSID: Obi_Lan_Kenobi
freq: 2412
RX: 32154 bytes (310 packets)
TX: 19436 bytes (128 packets)
signal: -38 dBm
rx bitrate: 1.0 MBit/s
tx bitrate: 72.2 MBit/s MCS 7 short GI
bss flags: short-preamble short-slot-time
dtim period: 2
beacon int: 100
I honestly can’t understand why.
Does anybody have a suggestion about that?
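In case it is relevant: wpa_supplicant only takes care of the layer-2 association, and unlike connman it does not configure an IP address or a default route, which is consistent with the sendto: Network is unreachable error. A minimal non-interactive sequence might look like this (a sketch: it assumes a DHCP server on the network, a driver that supports nl80211, and the static addresses in the comments are placeholders):

# associate; -D explicitly selects the driver (nl80211 with wext fallback)
wpa_supplicant -B -i mlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf -D nl80211,wext
# then obtain an address and default route via DHCP
udhcpc -i mlan0
# or configure them statically:
# ip addr add 192.168.1.50/24 dev mlan0
# ip route add default via 192.168.1.1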
I want to visualise my database and tried using SchemaSpy, but I can't connect to the database. It's a MySQL database, running on my computer, accessible under localhost, and I called SchemaSpy with the following command:
java -jar schemaspy-6.0.0.jar -t mysql -dp postgresql-42.2.7.jar -db <name> -host <host> -p <port> -s <name> -u <user> -p <password> -o ./output/
I downloaded postgresql-42.2.7.jar as a driver and set the respective parameter in the call. I also tried not setting a driver at all, but it didn't change the output.
I get the following error message after trying to run SchemaSpy:
SchemaSpy generates an HTML representation of a database schema's relationships.
SchemaSpy comes with ABSOLUTELY NO WARRANTY.
SchemaSpy is free software and can be redistributed under the conditions of LGPL version 3 or later.
http://www.gnu.org/licenses/
INFO - Starting Main v6.0.0 on ####### with PID 10774 (/path/to/file started by ##### in /path/to/file)
INFO - The following profiles are active: default
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.springframework.cglib.core.ReflectUtils$1 (jar:file:/path/to/file/schemaspy-6.0.0.jar!/BOOT-INF/lib/spring-core-4.3.13.RELEASE.jar!/) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
WARNING: Please consider reporting this to the maintainers of org.springframework.cglib.core.ReflectUtils$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
INFO - Found configuration file: schemaspy.properties
INFO - Started Main in 1.397 seconds (JVM running for 1.944)
INFO - Starting schema analysis
WARN - Connection Failure
I also tried setting the --illegal-access=warn parameter, but nothing changed.
Does anyone know what the problem might be?
Thanks a lot!
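Two things stand out in the command above: the PostgreSQL JDBC jar cannot talk to a MySQL server, and -p is used twice, while SchemaSpy expects -p for the password and -port for the port, so one of the two values is lost. A hedged example of what the call could look like with the MySQL Connector/J driver (the jar name/version is only an example; keep your own placeholders):

java -jar schemaspy-6.0.0.jar -t mysql -dp mysql-connector-java-8.0.19.jar -db <name> -host localhost -port 3306 -s <name> -u <user> -p <password> -o ./output/

For MySQL the -s schema is normally the same as the database name.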
Our Varnish Instance
/usr/sbin/varnishd -P /var/run/varnish.pid -a :6081 -f /etc/varnish/cm-varnish.vcl -T 127.0.0.1:6082 -t 1h -u varnish -g varnish -S /etc/varnish/secret -s malloc,24G -p shm_reclen 10000 -p http_req_hdr_len 10000 -p thread_pool_add_delay 2 -p thread_pools 8 -p thread_pool_min 500 -p thread_pool_max 4000 -p sess_workspace 1073741824
32 GB RAM, a 16-core processor, and we allocate 24 GB of memory to Varnish.
The average uptime of our Varnish instance is about 3 hours, which is very low. Our cache TTL is 1 hour and the grace time is 2 hours. Every 5 minutes we refresh the cache contents [those with more than n hits] through a Java process. We track Varnish hits by constantly polling the varnishncsa output.
I tried varnishadm panic.show:
Last panic at: Thu, 23 May 2013 09:14:42 GMT
Assert error in WSLR(), cache_shmlog.c line 220:
Condition(VSL_END(w->wlp, l) < w->wle) not true.
thread = (cache-worker)
ident = Linux,2.6.18-238.el5,x86_64,-smalloc,-smalloc,-hcritbit,epoll
Backtrace:
0x42dc76: /usr/sbin/varnishd [0x42dc76]
0x432d1f: /usr/sbin/varnishd(WSLR+0x27f) [0x432d1f]
0x42a667: /usr/sbin/varnishd [0x42a667]
0x42a89e: /usr/sbin/varnishd(http_DissectRequest+0xee) [0x42a89e]
0x4187d1: /usr/sbin/varnishd(CNT_Session+0x741) [0x4187d1]
0x42f706: /usr/sbin/varnishd [0x42f706]
0x3009c0673d: /lib64/libpthread.so.0 [0x3009c0673d]
0x30094d40cd: /lib64/libc.so.6(clone+0x6d) [0x30094d40cd]
Any input on what we are missing?
My best guess is that you have a very long cookie string (or other custom headers), so that it overflows http_req_hdr_len. I remember reading about such a bug that was fixed but, as far as I know, not yet released in a stable version. I'm afraid I don't have a better source than my own memory at hand.
You also have a very high sess_workspace and a very high possible total number of threads. In most setups that does less for performance than it does to increase the risk of swapping.
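If that is what is happening, the usual knobs to turn are raising http_req_hdr_len and pulling sess_workspace back to a sane size; something along these lines for the 3.x-style command line in the question (values are illustrative only):

# illustrative values only; keep the rest of the original command line unchanged
-p http_req_hdr_len 32768 -p sess_workspace 262144 -p thread_pools 2 -p thread_pool_min 200 -p thread_pool_max 2000

A gigabyte of workspace per session plus 8 pools of up to 4000 threads each is far more than 32 GB of RAM can comfortably back, so shrinking those also addresses the swapping risk mentioned above.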
I am again rephrasing the issue that we are facing:
We are creating link aggregations [DLMP groups] with two interfaces, net0 & net5:
# dladm create-aggr -m dlmp -l net0 -l net5 -l net2 aggr1
Setting probe targets for aggr1:
# dladm set-linkprop -p probe-ip=+ aggr1
Setting failure detection time:
# dladm set-linkprop -p probe-fdt=15 aggr1
After this we plumb an IP interface on the aggregation as follows:
# ipadm create-ip aggr1
Then we assign an IP address to it:
# ipadm create-addr -T static -a x.x.x.x/y aggr1/addr
Then we check the status using dladm and ipadm, and everything seems up and running.
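For reference, the status checks at this point are along these lines (a sketch; the exact output columns vary by Solaris release, and the names match the commands above):

# dladm show-aggr -x aggr1
# ipadm show-if aggr1
# ipadm show-addr aggr1/addr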
Then we tested a scenario where we detached the cables from the above network interfaces, but what we got is as follows:
# dladm show-aggr -x
LINK PORT SPEED DUPLEX STATE ADDRESS PORTSTATE
traf0 -- 100Mb unknown up 0:10:e0:5b:69:1 --
net0 100Mb unknown down 0:10:e0:5b:69:1 attached
net5 100Mb unknown down a0:36:9f:45:de:9d attached
The first issue is that we are getting the state of link "traf0" as up in the above command output; secondly, in the output of "ipadm":
traf0 ip ok -- --
traf0/addr static ok -- 7.8.0.199/16
We are getting the status of traf0 as ok.
So here is my question: is there any configuration with which we could get the right status of traf0 in both the dladm and the ipadm output?
[One more thing to add: when we don't assign any IP to the traf0 aggregation, then on detaching the cables we get the right output from the dladm command.]
Apart from this configuration, we are using these aggregations as VNICs in zones. There, too, the ipadm output shows the status of these links as up [after detaching the cables].
A small update:
We have set the "TRACK_INTERFACES_ONLY_WITH_GROUPS" parameter in /etc/default/mpathd to no, and we now get the state of "traf0" in the ipadm output as failed, but traf0/addr is still reported as ok.
traf0 ip failed -- --
traf0/addr static ok -- 7.8.0.199/16