DPDK Switch Representation testpmd flow commands not working - virtualization

My question is related to a question I asked earlier:
Forward packets between SR-IOV Virtual Function (VF) NICs
Basically, what I want to do is use 4 SR-IOV virtual functions of an Intel 82599ES and direct traffic between the VFs as I need. The setup is something like this (don't mind the X710 in the diagram; I'm using the 82599ES now).
For simplicity while testing, I'm only using one VM running warp17 to generate traffic, sending it through VF1 and receiving it back on VF3. Since recent DPDK versions have a switching function, as described in https://doc.dpdk.org/guides-18.11/prog_guide/switch_representation.html?highlight=switch, I'm trying to use testpmd to configure the switching. But testpmd doesn't seem to accept any flow command I enter; all I get is "Bad argument". For example, it fails with this command:
flow create 1 ingress pattern / end actions port_id id 3 / end
My procedure is as follows:
Bind my PF (82599ES) to the igb_uio driver.
Create 4 VFs using the following command:
echo "4" | sudo tee /sys/bus/pci/devices/0000:65:00.0/max_vfs
Bind 2 VFs to the vfio-pci driver using:
echo "8086 10ed" | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id
sudo ./usertools/dpdk-devbind.py -b vfio-pci 0000:65:10.0 0000:65:10.2
Use PCI passthrough to assign the VFs to the VM and start the VM:
sudo qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -hda WARP17-disk1.qcow2 -m 6144 \
-display vnc=:0 -redir tcp:2222::22 \
-net nic,model=e1000 -net user,name=mynet0 \
-device pci-assign,romfile=,host=0000:65:10.0 \
-device pci-assign,romfile=,host=0000:65:10.2
Run testpmd with the PF and 2 port representors for the VFs:
sudo ./testpmd --lcores 1,2 -n 4 -w 65:00.0,representor=0-1 --socket-mem 1024 --socket-mem 1024 --proc-type auto --file-prefix testpmd-pf -- -i --port-topology=chained
Am I doing something wrong or is this the nature of testpmd?
My DPDK version is 18.11.9.

Please note that the 82599ES uses the ixgbe PMD and the X710 uses the i40e PMD. They are different and have different capabilities. Comparing the ixgbe PMD documentation (http://doc.dpdk.org/guides/nics/ixgbe.html) with the i40e PMD documentation (http://doc.dpdk.org/guides/nics/i40e.html): Flow Director covers ingress packets (packets received from the external port into the ASIC), while Floating VEB is the feature you need for switching traffic between VFs, and it is only present on the X710, not on the 82599ES.
To enable the floating VEB one needs to use -w 84:00.0,enable_floating_veb=1 on the X710, but this limits functionality: you will no longer be able to receive and send on the physical port.
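For reference, a minimal testpmd invocation on an X710 with the floating VEB devarg could look like the following. This is only a sketch: the PCI address 84:00.0 comes from the flag above, while the core, memory and channel values are placeholders, and the devarg applies to the i40e PMD only.
sudo ./testpmd -l 1,2 -n 4 -w 84:00.0,enable_floating_veb=1 --socket-mem 1024 -- -i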
So the best option is to use 2 x 10 Gbps ports, where dpdk-0 is used by warp17/pktgen/trex and dpdk-1 is used by vm-1/vm-2/vm-3. The simplest control knob is to steer traffic by matching the destination MAC address of each VF (see the sketch after the setup list below).
Setup:
Create the necessary VFs on port-0 and port-1.
Share the VFs with the relevant VMs.
Bind the DPDK VF ports to igb_uio.
From the traffic generator on port-0, send traffic with the destination MAC address of the relevant VF.
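As a rough illustration of the MAC-based steering, you could pin a known MAC to each VF before handing it to a VM. This sketch assumes the PF is still under the kernel ixgbe driver; the interface name and MAC addresses are placeholders:
# hypothetical: assign fixed MACs to the VFs created on the PF
ip link set dev enp101s0f1 vf 0 mac aa:bb:cc:dd:ee:01
ip link set dev enp101s0f1 vf 1 mac aa:bb:cc:dd:ee:02
The traffic generator on port-0 then sends with, for example, aa:bb:cc:dd:ee:01 as the destination MAC to reach the VM behind VF 0.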
[P.S.] This is the information we discussed over Skype.

Related

How to configure the rate filter in Snort in a Windows environment?

I have installed and configured Snort on Windows. As per the documentation, the threshold keyword is deprecated and I have to use other filters. I need to use rate_filter in my application, but I don't know how to set it up in Snort.
I have read the documentation and various internet resources, and I have added the example rate_filter snippets directly to my snort.conf file, but I still can't get what I want.
Am I missing something?
You may need to share your filter to get more specific help here. An example of the layout:
Example 1 - allow a maximum of 100 connection attempts per second from any one IP address, and block further connection attempts from that IP address for 10 seconds:
rate_filter \
gen_id 135, sig_id 1, \
track by_src, \
count 100, seconds 1, \
new_action drop, timeout 10
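If you instead want to rate-limit one of your own text rules rather than a preprocessor event, the filter references gen_id 1 and the rule's sid. The rule and sid 1000001 below are placeholders, not something from your configuration:
# local.rules - hypothetical rule to be rate-limited
alert tcp any any -> $HOME_NET 22 (msg:"SSH connection attempt"; sid:1000001; rev:1;)
# snort.conf - if one source triggers sid 1000001 more than 5 times in 60 seconds,
# switch the action to drop for that source for the next 60 seconds
rate_filter \
gen_id 1, sig_id 1000001, \
track by_src, \
count 5, seconds 60, \
new_action drop, timeout 60
Note that new_action drop only takes effect when Snort runs inline (IPS mode).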

Distribute a Simulink Desktop Real-Time model

Recently I tried to develop a simple Simulink model that receives a UDP packet, performs some calculation, and returns an answer via another UDP port. The model works just fine, and I was able to compile it to an EXE with no problem.
My goal was for the model to run in real time, meaning 1 second in the simulation equals 1 second on the PC. After some research I discovered the block
Real-Time Sync
which does the trick; now my simulation works exactly as I want. Next, when I tried to build the project, after making all the changes in the settings according to the documentation (mainly changing the target to sldrt.tlc), I got this at the end of the compile process:
### Created Simulink Desktop Real-Time module udpTest.rxw64
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/clang/win64/llvm-link-bca \
-Bstatic \
-o udpTest.bc \
udpTest.obj rtGetInf.obj rtGetNaN.obj rt_nonfinite.obj udpTest_data.obj udpTest_tgtconn.obj sldrt_main.obj rt_sim.obj ext_svr.obj updown_sldrt.obj \
\
\
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/lib/win64/imports.obj \
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/lib/win64/sldrtlib.lib
C:/PROGRA~1/MATLAB/R2017b/toolbox/sldrt/clang/win64/llc -mtriple=x86_64-pc-win32 -O3 -O3 -filetype=obj -o ../udpTest.rxw64 udpTest.bc
Build process completed successfully
As far as I understand, I can load that rxw64 file in Simulink in external mode and control it; all that works, and I've done it. But is it possible to distribute it to a dedicated PC?
PS: Sorry for the long description, but I feel really confused and want to give all the details.
Case closed. The answer is that I can't distribute my model as a separate application. I must set up a target PC dedicated to running the binary equivalent of my model. Next step: look for a suitable DOS-like boot setup, and maybe try it in some kind of virtual PC.

How to detect pings sent from one virtual machine to another using Snort, which is integrated in AlienVault?

For the record, I followed these instructions (found on a website):
I enabled snort sensors (snort_syslog and snortunified).
In alienvault: ~# nano /etc/snort/rules/local.rules
I created the following rule:
alert icmp 192.168.1.130 192.168.1.120 -> any any
(msg:"blablabla"; sid:1000004)
Save and exit
After that I did:
alienvault:~# perl /usr/share/ossim/scripts/create_sidmap.pl /etc/snort/rules/
alienvault:~# /etc/init.d/ossim-server restart
For some reason, nothing appears in the AlienVault SIEM interface when I ping 192.168.1.120 from 192.168.1.130.
Any ideas?
I don't know whether it is still relevant, but in my opinion there is a mistake in your Snort rule:
A Snort rule cannot have two IP addresses in the source part of the rule header.
Where you declared the IP 192.168.1.120, you have to declare a port instead.
The rules you need look like the following (if I understand you correctly):
alert icmp 192.168.1.120 any -> 192.168.1.130 any (msg:"blablabla"; sid:1000004;)
And also the other way:
alert icmp 192.168.1.130 any -> 192.168.1.120 any (msg:"blablabla"; sid:1000005;)
For writing rules in the correct syntax take a look at the manual of snort: http://manual.snort.org/node29.html#SECTION00423000000000000000
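Before restarting the OSSIM server, you can also check that the rules parse cleanly with Snort's test mode. The config path and interface below are typical Debian/AlienVault defaults and may differ on your sensor:
sudo snort -T -c /etc/snort/snort.conf -i eth0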
I hope that this can help you.
/Chris

How do you access a MongoDB database from two Openshift apps?

I want to be able to access my MongoDB database from 2 OpenShift apps: one is an interactive database-maintenance app used via the browser, the other is the principal web application that serves mobile devices. As I see it, in OpenShift MongoDB gets set up within a particular app's folder space, not independent of that space.
What would be the method to accomplish this multiple-app access to the database?
It's not ideal, but is my only choice to merge the functionality of both OpenShift apps into one? That tastes like a bad plate of spaghetti.
2018 update: this applies to OpenShift 2. Version 3 is very different; while the general rules of Linux and scaling still apply, the details here are obsolete.
Although @MartinB's answer was timely and correct, it's just a link, so let me put the essentials here.
Assuming that setting up a non-shared DB is already done, you need to find its host and port. You can ssh into your app (the one with the DB) or use rhc:
rhc ssh -a appwithdb
env | grep MONGODB
env brings all the environment variables, and grep filters them to show only Mongo-related ones. You should see something like:
OPENSHIFT_MONGODB_DB_HOST=xxxxx-yyyyy.apps.osecloud.com
OPENSHIFT_MONGODB_DB_PORT=zzzzz
xxxxx is the ID of the gear that Mongo sits on
yyyyy is your domain/namespace
zzzzz is MongoDB port
Now you can use these to create a connection to the DB from anywhere in your OpenShift environment. The other application has to use the xxxxx-yyyyy:zzzzz address. You can store these in custom variables to make maintenance easier:
$ rhc env-set \
MYOWN_DB_HOST=xxxxx-yyyyy \
MYOWN_DB_PORT=zzzzz \
MYOWN_DB_PASSWORD=****** \
MYOWN_DB_USERNAME=admin..... \
MYOWN_DB_NAME=dbname...
And then use these environment variables instead of the standard ones. Just remember they don't get updated automatically if the DB moves to another gear.
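For example, the second application could build its connection string from those custom variables. A minimal sketch, assuming the variable names set above and the mongo shell being available in that gear:
# hypothetical: connect from the other app's gear using the custom variables
mongo "mongodb://${MYOWN_DB_USERNAME}:${MYOWN_DB_PASSWORD}@${MYOWN_DB_HOST}:${MYOWN_DB_PORT}/${MYOWN_DB_NAME}"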
Please read the following article from the OpenShift blog: https://blog.openshift.com/sharing-database-across-applications/

Configuring FQDN for GCE instance on startup

I am trying to start a google compute engine (GCE) instance with a pre-configured FQDN. We are intending to run an application that is licensed based on the contents of /etc/hosts.
I am starting the instances using the Google Cloud SDK utility - gcloud.
I have tried setting the "hostname" key using the metadata option like so:
gcloud compute instances create mynode (standard opts) --metadata hostname=mynode.example.com
Whenever I log into the developer console, under Compute > Instances, I can see hostname under "Custom metadata". This appears to be a new, custom key; it has no impact on what:
http://metadata.google.internal/computeMetadata/v1/instance/hostname
returns.
I have also tried setting "instance/hostname" as shown below, which causes a parsing error when using gcloud.
--metadata instance/hostname=mynode.example.com
I have successfully used the startup-script functionality of the metadata server to run a script that parses the new internal IP address of the newly created instance and updates /etc/hosts. This works, but it doesn't feel "like the Google way".
Can I configure the FQDN (specifically, a domain name, as the instance name is always the hostname) of an instance, during instance creation, using the metaserver functionality?
Try this:
Go to your GCE >> VM instances panel.
Stop your GCE instance.
Click on the instance name.
Edit your instance, adding these values in the Custom metadata fields:
Key field: hostname / Value field: your.server.hostname
Key field: startup-script / Value field: sudo -s hostnamectl set-hostname your.server.hostname
Finally, start your instance and test with the hostnamectl command.
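The same metadata can also be set without the console; a hedged sketch using gcloud, where the instance name and hostname are placeholders:
gcloud compute instances add-metadata INSTANCE_NAME \
  --metadata hostname=your.server.hostname,startup-script='sudo hostnamectl set-hostname your.server.hostname'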
regards!
According to this article 'hostname' is part of the default metadata entries that provide information about your instance and it is NOT possible to manually edit any of the default metadata pairs. You can also take a look at this video from the Google Team. Within the first few minutes it is mentioned that you cannot modify default metadata pairs. As such, it does not seem like you can specify the hostname upon instance creation other than through the use of a start-up script like you've done already. It is also worth mentioning that the hostname you've specified will get deleted and auto-synced by the metadata server upon reboot unless you're using a start-up script or something that would modify it every time.
If what you're currently doing works for what you're trying to accomplish, it might be the only workaround to your scenario.
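For reference, such a startup script might look roughly like the sketch below. It assumes the desired FQDN was stored in a custom metadata key named hostname, as described in the question:
#!/bin/bash
# hypothetical startup-script: read the desired FQDN from custom metadata and
# map it to the instance's internal IP in /etc/hosts
FQDN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/hostname")
IP=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip")
echo "${IP} ${FQDN} ${FQDN%%.*}" >> /etc/hosts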
Here is a patch for /usr/share/google/set-hostname to set an FQDN on a GCE instance.
https://gist.github.com/yuki-takeichi/3080521322f0f1d159ea6a343e2323e6
Before you use this patch, you must set your desired FQDN in your instance's metadata under the hostname key.
The hostname is set each time the instance's IP address is renewed by dhclient. set-hostname is just a hook script that dhclient executes, passing it the new IP address and internal hostname, and it modifies /etc/hosts. This patch changes the source of the hostname by querying the instance's metadata from the metadata server.
The original set-hostname script is here:
https://github.com/GoogleCloudPlatform/compute-image-packages/blob/master/google_config/bin/set_hostname.
Use this patch at your own risk.
When creating a VM, you can specify a custom FQDN hostname as an optional parameter. This feature is currently in Beta.
$ gcloud beta compute instances create INSTANCE_NAME --hostname example.hostname
This should work across OSes, and eliminate the need for workaround scripts.
More info in the docs.
-- Sirui (Product Manager, Google Compute Engine)
I've looked throughout this site for answered questions and found a few things that work, but only with a couple of solutions combined. This thread seems the place to answer.
1) echo example.com > /etc/hostname
2) Add 127.0.1.1 example.com to /etc/hosts
3) Add the command hostnamectl set-hostname example.com to the /etc/rc.local script
4) Uncomment this line in /etc/dhcp/dhclient.conf:
supersede domain-name "example.com";
5) Profit... it seems to stick after each reboot.
(Note: example.com is your own domain name, e.g. yourfqdndomain.com or yourfqdndomain.org.)
Also note this is for Ubuntu or Debian; other distributions may vary slightly. I've tested this on Ubuntu 16.04.
Regarding the wording "NOT possible to manually edit any of the default metadata pairs": what about the instance-level default metadata "/scheduling"? We could set that manually, as mentioned in this article.