Running p4 on OVS - bpf

I am trying to run P4 on OVS following this GitHub repo https://github.com/Orange-OpenSource/p4rt-ovs. I could set up OvS with DPDK as well as p4c to compile P4 to C and then C to uBPF code. However, I am missing the command to load the BPF program into my OvS:
ovs-ofctl load-bpf-prog br0 1 /tmp/tunneling.o
However, this option seems to be missing from my OvS (error: unknown option load-bpf-prog). I am using the same version of OvS as the repo. Any suggestions/hints as to how I could install this utility (load-bpf-prog)? Thank you for your help.
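If it helps narrow things down: load-bpf-prog appears to be provided by the ovs-ofctl built from the p4rt-ovs fork, not by stock Open vSwitch, so one thing to check is whether the ovs-ofctl on your PATH really came from that fork. A rough sketch of rebuilding and checking it, assuming the fork follows the usual OvS autotools flow (the exact configure flags may differ from the repo's README):
# Sketch only: build ovs-ofctl from the p4rt-ovs fork instead of stock OvS.
git clone https://github.com/Orange-OpenSource/p4rt-ovs.git
cd p4rt-ovs
./boot.sh
./configure --with-dpdk=$DPDK_BUILD   # DPDK build dir, as for a normal OvS-DPDK build
make -j"$(nproc)"
sudo make install
# Make sure the freshly built binary is the one actually being invoked:
which ovs-ofctl && ovs-ofctl --version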

Related

Docker nuget connection timeout

Trying to utilize the official jetbrains/teamcity-agent image on Kubernetes. I've managed to run Docker in Docker there, but trying to build an ASP.NET Core image with the docker build command fails on dotnet restore with
The HTTP request to 'GET https://api.nuget.org/v3/index.json' has timed out after 100000ms.
When I connect to the pod itself and try curling the URL, it's super fast. So I assume the network is not an issue. Thanks for any advice.
Update
Running a simple dotnet restore step from the container worked, but not from inside the docker build.
Update 2
I've isolated the problem; it has nothing to do with NuGet or TeamCity. It is network-related on the Kubernetes host.
Running a simple docker build with this Dockerfile:
FROM praqma/network-multitool AS build
RUN route
RUN ping -c 4 google.com
produces output:
Step 1/3 : FROM praqma/network-multitool AS build
---> 3619cb81e582
Step 2/3 : RUN route
---> Running in 80bda13a9860
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 eth0
Removing intermediate container 80bda13a9860
---> d79e864eafaf
Step 3/3 : RUN ping -c 4 google.com
---> Running in 76354a92a413
PING google.com (216.58.201.110) 56(84) bytes of data.
--- google.com ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 53ms
Pods orchestrated by Kubernetes can access the internet normally. I'm using Calico as the network layer.
I fixed this issue by passing the --disable-parallel argument to the restore command, which disables restoring multiple projects in parallel.
RUN dotnet restore --disable-parallel
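For reference, a minimal sketch of where that flag sits in a multi-stage Dockerfile (image tag, paths, and publish step are placeholders, not taken from the original setup):
# Sketch: placeholder image tag and layout.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
# --disable-parallel downloads one package at a time, avoiding the stalled
# parallel connections that were timing out inside docker build.
RUN dotnet restore --disable-parallel
RUN dotnet publish -c Release -o /app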
I have exactly the same behaviour:
I have a solution which contains several NuGet dependencies.
It builds without any issue on my local machine.
It builds without any issue on a Windows build agent.
It builds without any issue on the Docker host machine.
But when I try to build it in a build agent in Docker, I get a lot of messages like the following:
Failed to download package 'System.Threading.4.0.11' from 'https://api.nuget.org/v3-flatcontainer/system.threading/4.0.11/system.threading.4.0.11.nupkg'.
The download of 'https://api.nuget.org/v3-flatcontainer/system.threading/4.0.11/system.threading.4.0.11.nupkg' timed out because no data was received for 60000ms
I can ping and curl pages from nuget.org normally from the Docker container.
So I think this is some special case. I found some info about MTU, but I haven't tested it.
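For anyone who wants to test the MTU angle, a hedged sketch: a common cause of exactly this symptom is the Docker daemon building images with the default MTU of 1500 while the cluster overlay network uses a smaller one, so large TLS transfers stall. The value below is an example, not something verified here:
# /etc/docker/daemon.json — example value; match it to the MTU that
# `ip link` reports for the flannel/calico interface (often 1450 or lower).
{
  "mtu": 1450
}
# Then restart the daemon so builds pick it up:
sudo systemctl restart docker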
UPDATE: the initial problem may be connected to k8s. My container runs inside a k8s cluster based on Ubuntu 18.04 with Flannel and k8s v1.16.
On my local machine (Windows-based) everything works without any issue... but it is strange, because I have many services that work in this cluster without any problems (such as Harbor, Graylog, Jaeger, etc.)!
UPDATE 2: OK, now I can't understand anything.
I try to execute
curl https://api.nuget.org/v3/index.json
and I get the file content without any errors.
After this I try to run
wget https://api.nuget.org/v3-flatcontainer/system.threading/4.0.11/system.threading.4.0.11.nupkg
and the package downloads successfully.
But after I run dotnet restore I still receive timeout errors.
UPDATE 3
I tried to reproduce the problem, not in the k8s cluster but in Docker locally.
I run the container:
docker run -it -v d:/project/test:/mnt/proj teamcity-agent-core3.1 bash
teamcity-buildagent-core3.1 is my image based on jetbrains/teamcity-agent, which contains the .NET Core 3.1 SDK.
Then I execute this command inside the interactive session:
dotnet restore test.sln
which failed with the following messages:
Failed to download package 'System.Runtime.InteropServices.4.3.0' from 'https://api.nuget.org/v3-flatcontainer/system.runtime.interopservices/4.3.0/system.runtime.interopservices.4.3.0.nupkg'.
Received an unexpected EOF or 0 bytes from the transport stream.
The download of 'https://api.nuget.org/v3-flatcontainer/system.text.encoding.extensions/4.3.0/system.text.encoding.extensions.4.3.0.nupkg' timed out because no data was received for 60000ms.
Exception of type 'System.TimeoutException' was thrown.
In my case the solution was laid out here.
As noted in the comment, "So maybe the issue needs to be fixed by microsoft by changing the default nuget.config inside of mcr.microsoft.com/dotnet/sdk:5.0."
This was my problem: Docker building from sdk:5.0. The solution seems to do the job, which is to add a nuget.config file to the root of the solution.
Contents of nuget.config (again, from posts in that issue):
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key='maxHttpRequestsPerSource' value='10' />
  </config>
</configuration>
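If the restore runs inside a Dockerfile, the file also needs to be copied into the image before dotnet restore so the limit actually applies there; a sketch with a placeholder solution name:
# Sketch: copy nuget.config from the solution root before restoring, so the
# maxHttpRequestsPerSource limit takes effect inside the build container.
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY nuget.config ./
COPY . .
RUN dotnet restore MySolution.sln   # placeholder solution name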
I had a similar issue. The mistake I was making was not specifying the exact dotnet version for the Docker image.
FROM mcr.microsoft.com/dotnet/core/sdk AS build
My project targets dotnet 2.2. What I did not know was that this was pulling the latest dotnet SDK, 3.1. So when dotnet restore ran, it timed out.
So this is what I did.
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
I had to specify a specific version. I'm not sure if this is related to your problem, but I hope it sends you in the right direction. Always be explicit with the image version.
I had a similar issue to #NIMROD MAINA's and #Anatoly Kryzhanovsky's when I was building in a Docker container from gitlab-runner (Docker).
When I ran dotnet restore outside the Docker container, everything worked!
In my case it didn't work when nuget.config was inside the project folder.
I put nuget.config in the solution folder (outside the project folder) and it worked again.
For me the solution was setting Docker (Windows) to:
Expose daemon on tcp://localhost:2375 without TLS (true) and
Use Docker Compose V2 (true)
It's a temporary solution, but it works.
Check your DNS settings (A record). Try typing nslookup yourfeeddomain. Make sure that there is a single IP address and that it resolves.

Oracle VirtualBox VM network not working

I am attempting to set up a VM using VirtualBox. I am hosting on Windows 10 and want to set up a CentOS VM. I have a VM running but have had problems getting network connectivity with it. I have no experience with VirtualBox and it has been a long time since I worked on Linux. Any ideas on what I need to do to correct this? Are there some steps I need to take during the creation of the image?
Image is : CentOS-7-x86_64-Everything-1708.iso
VirtualBox : Version 5.1.28 r117968 (Qt5.6.2)
When I try to ping anything I get "connect: Network is unreachable".
The very best thing you can do is run the following command:
ifconfig -a
Then, if you have an interface listed (not just 'lo'), you can do this:
# cd /etc/sysconfig/network-scripts/
# sed -i -e 's#^ONBOOT="no#ONBOOT="yes#' ifcfg-{{network_device}}
Replace {{network_device}} with your default network device (from the ifconfig -a output).
Then restart and it should connect.
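To apply it without a full reboot, something like the following should also work on a CentOS 7 guest (the device name is an example; use the one from ifconfig -a):
# Bring the interface up now and restart networking.
sudo ifup enp0s3                  # example device name
sudo systemctl restart network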

LIRC irsend: could not connect to socket irsend: No such file or directory

I am trying to configure LIRC to work with my Raspberry Pi 2B and a circuit I built with a transistor and an IR transmitter, as explained in this tutorial.
After installing LIRC, I followed all the steps and added these two lines to /etc/modules:
lirc_dev
lirc_rpi gpio_out_pin=36
Then I put this in /etc/lirc/hardware.conf:
LIRCD_ARGS="--uinput"
LOAD_MODULES=true
DRIVER="default"
DEVICE="/dev/lirc0"
MODULES="lirc_rpi"
LIRCD_CONF=""
LIRCMD_CONF=""
After rebooting, I added the configuration of my Samsung remote (BN59-00516A) to /etc/lirc/lircd.conf
Then I restarted LIRC again, but when I run a command to send an IR signal,
irsend SEND_ONCE Samsung_BN59-00865A KEY_POWER
it complains with the following error:
irsend: could not connect to socket
irsend: No such file or directory
I am guessing this is a problem with my device socket, because in the hardware.conf file I set
DEVICE = "/dev/lirc0"
(just because the tutorial states it), but the lirc0 file isn't in that folder.
I couldn't find any other question related to this problem and Google didn't help me much either. Does anyone have a hint on this?
After googling a lot, I found out an update is needed to have everything working properly. In my case I did:
apt-get update, apt-get upgrade, rpi-update
Also, as pointed out in this other tutorial, depending on the Raspberry firmware, you might need to add this to /boot/config.txt
dtoverlay=lirc-rpi,gpio_in_pin=XX,gpio_out_pin=YY
Substitute XX and YY for whatever pins you're using!
I had a similar problem and I solved it with this command:
sudo lircd --device /dev/lirc0
If you set the value of LIRCD_ARGS in /etc/lirc/hardware.conf to "--device /dev/lirc0", it should start lircd appropriately when /etc/init.d/lirc is started at boot.
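In other words, the relevant part of /etc/lirc/hardware.conf would look roughly like this (a sketch; only LIRCD_ARGS changes compared to the file shown in the question):
# /etc/lirc/hardware.conf — make lircd open /dev/lirc0 when started at boot.
LIRCD_ARGS="--device /dev/lirc0"
LOAD_MODULES=true
DRIVER="default"
DEVICE="/dev/lirc0"
MODULES="lirc_rpi"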
You need to run lircd. It will create two files (lircd and lircd.pid) in /var/run/lirc/:
lircd
I got the same error messages but had all the configuration done. Restarting the lirc daemon solved the issue; I typed
$ sudo /etc/init.d/lirc restart
I think it is useful to say that the gpio_in_pin=XX gpio_out_pin=YY part of /etc/modules can be double-checked with
dmesg | grep lirc
which results in something like
[ 3.437499] lirc_dev: IR Remote Control driver registered, major 244
[ 5.472916] lirc_rpi: module is from the staging directory, the quality is unknown, you have been warned.
[ 6.621156] lirc_rpi: auto-detected active high receiver on GPIO pin 22
[ 6.622515] lirc_rpi lirc_rpi: lirc_dev: driver lirc_rpi registered at minor = 0
[ 6.622528] lirc_rpi: driver registered!
for /etc/modules containing
lirc_dev
lirc_rpi gpio_in_pin=23 gpio_out_pin=22

Unable to install AUTHBIND on CentOS 6

I tried to install authbind but am getting the error below.
Can anyone please help me resolve this error?
There is this project here: https://github.com/tootedom/authbind-centos-rpm
You can easily download the file with:
wget https://s3.amazonaws.com/aaronsilber/public/authbind-2.1.1-0.1.x86_64.rpm
and install it with:
rpm -Uvh https://s3.amazonaws.com/aaronsilber/public/authbind-2.1.1-0.1.x86_64.rpm
The previous answer by irrational won't work because that RPM is built against glibc 2.14, which is only on CentOS 7, not 6.
rpm -Uvh authbind-2.1.1-0.1.x86_64.rpm
error: Failed dependencies:
libc.so.6(GLIBC_2.14)(64bit) is needed by authbind-2.1.1-0.1.x86_64
I think you have to build the rpm yourself from the instructions at https://github.com/tootedom/authbind-centos-rpm
I'm having some trouble because the spec file appears to have some errors.
UPDATE:
step-by-step instructions:
svn co https://github.com/tootedom/authbind-centos-rpm.git
mkdir /root/rpmbuild
cp -R authbind-centos-rpm.git/trunk/authbind/* /root/rpmbuild/
cd /root/rpmbuild/SOURCES
wget http://ftp.debian.org/debian/pool/main/a/authbind/authbind_2.1.1.tar.gz
mv authbind_2.1.1.tar.gz authbind-2.1.1.tar.gz
cd ../
rpmbuild -v -bb --clean SPECS/authbind.spec
After all that fix-up, the RPM actually built and is now at:
/root/rpmbuild/RPMS/x86_64/authbind-2.1.1-0.1.x86_64.rpm
You can now install that using rpm -Uvh and have access to authbind like the Debian folks.
I did this on a CentOS 6.7 minimal OS.
I have CentOS 6 and was having trouble getting a version that would work. Perhaps my solution is just cutting the Gordian knot with a sword, but here goes.
I needed to use authbind in the first place because I was trying to make Tomcat work on port 80. If that's why you're messing with authbind, this should be especially helpful.
I also couldn't make Tomcat work with any of the various authbind variations. The one thing I did that is worth reporting is that you can just get the sources for authbind, build them, and run them. Authbind has supported IPv6 since 2012. If you are having trouble wrestling with distribution packages to make authbind work, including problems with glibc, this approach might be useful. As far as I can tell, authbind doesn't do anything that requires a new glibc, so this worked well, and authbind runs on my CentOS 6 happily and without problems.
Keep in mind that this is Linux, sources are available, and sometimes it's easier just to rebuild something than to try to get it from a distribution source, especially, as here, when the problem is getting established software like authbind to work with an old version of glibc.
So, first get the tar file. I got it from:
http://ftp.debian.org/debian/pool/main/a/authbind/authbind_2.1.1.tar.gz
Create a directory, then "tar xvf" the file, and then do a "make all" and "make install".
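A rough sketch of those steps, assuming the 2.1.1 tarball linked above and that Tomcat runs as the tomcat user (adjust the user and port to your setup):
# Sketch: build authbind 2.1.1 from the Debian source tarball.
mkdir -p ~/build/authbind && cd ~/build/authbind
wget http://ftp.debian.org/debian/pool/main/a/authbind/authbind_2.1.1.tar.gz
tar xvf authbind_2.1.1.tar.gz
cd authbind-2.1.1
make all
sudo make install
# Allow the tomcat user to bind port 80 (standard authbind per-port setup):
sudo touch /etc/authbind/byport/80
sudo chown tomcat /etc/authbind/byport/80
sudo chmod 500 /etc/authbind/byport/80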
However, the solution that worked for me (as I mentioned, I needed authbind in order to make port numbers below 1024 available to Tomcat) was simply to change the iptables rules, which I did as follows. (You can cut and paste this into a script if you want to save it for future reference.)
# check that rules are not there already
# note: you must be root; if you aren't do a su, or sudo before each line
iptables -L -n -t nat
# Add rules
iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
iptables -t nat -I PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
# Check
iptables -L -n -t nat
# Save
service iptables save
iptables -L -n -t nat
Giving credit where it's due, this is described well (but with some errors that should be obvious) at
https://www.locked.de/how-to-run-tomcat-on-port-80/
Marklan

NED path error while running Veins/Omnet++ simulations on Ubuntu Server

I have set up a Linux server to run Veins/OMNeT++ simulations on it. The main reason I am doing this is to decrease simulation time. The server is running Ubuntu Server 14.04.3, OMNeT++ 4.6, SUMO 0.22.0 and Veins 4a2. After installing OMNeT++ and SUMO, I changed to the Veins root directory and ran ./configure and make MODE=release -j 32. This generated an executable veins-4a2 file that I tried to run as:
./veins-4a2 -u Cmdenv -f examples/veins/omnetpp.ini   # since omnetpp.ini is under the examples folder
But I got the following error:
Loading NED files from /home/simulator/veins-4a2/examples/veins: 1
<!> Error: NED type `RSUExampleScenario' could not be fully resolved, due to a missing base type or interface.
Before running the previous command, I opened another SSH session into the machine to run the command ./sumo-launchd.py -vv -c sumo.
My questions are: what does the error refer to? Have I missed any steps during my installation/configuration? Am I doing the make step for Veins properly?
For future reference: specify the NED path, or have a look at the NED source folder in the project properties.
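In this setup that means listing the NED folders explicitly with -n (or setting ned-path in omnetpp.ini). A sketch, with paths assumed from the directory layout in the question (binary at the repo root, Veins sources under ./src); adjust if your tree differs:
# Run from the example folder and point -n at both NED roots.
cd /home/simulator/veins-4a2/examples/veins
../../veins-4a2 -u Cmdenv -n .:../../src/veins -f omnetpp.ini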