How can I configure the network on my Ubuntu 14.04 server to associate one public IP address per namespace, and then launch commands in a given namespace from the command line?
I want to do this:
configuration:
nameSpace1 = publicIP1
nameSpace2 = publicIP2
nameSpace3 = publicIP3
Terminal:
nameSpace1 ffmpeg etc...
nameSpace2 ffmpeg etc...
nameSpace2 youtube-dl etc...
nameSpace2 streamlink etc...
nameSpace3 ffmpeg etc...
Thanks a lot for helping me :)
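A minimal sketch of one way to do this with ip netns, assuming placeholder names and addresses; how the veth pairs are bridged or routed to the uplink depends on how the public IPs are routed to the machine:
# create the namespace and a veth pair linking it to the host
sudo ip netns add nameSpace1
sudo ip link add veth1 type veth peer name veth1-ns
sudo ip link set veth1-ns netns nameSpace1
# assign the public IP inside the namespace and bring the links up
sudo ip netns exec nameSpace1 ip addr add publicIP1/24 dev veth1-ns
sudo ip netns exec nameSpace1 ip link set lo up
sudo ip netns exec nameSpace1 ip link set veth1-ns up
sudo ip link set veth1 up
# default route via the host side of the veth (placeholder gateway)
sudo ip netns exec nameSpace1 ip route add default via hostSideIP
# anything run this way uses nameSpace1's network stack, and thus publicIP1
sudo ip netns exec nameSpace1 ffmpeg ...
Repeat for nameSpace2/nameSpace3 with their own veth pairs and IPs.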
I'm trying to use ParcelJS with Lando and there's one problem if you want HMR to work. You need to expose a port and that seems to be much harder than it should be with Lando. :(
So I know I need to do this for my ParcelJS watch command:
parcel watch dev/scripts.js --out-dir prod/ --hmr-port 6101
Then I need to expose the port I've assigned, in this case "6101" to Docker (via my Lando config file). But that's where it's tricky, apparently, because of the proxy setup Lando uses.
My current .lando.yml config is below, but it doesn't work as expected and the port is not exposed. I still get a "scripts.js:224 WebSocket connection to 'wss://testwp.lndo.site:6101/' failed:" error message from my ParcelJS generated script file in my browser's dev tools:
name: testwp
recipe: wordpress
config:
  php: '8.0'
  via: nginx
  webroot: wordpress
  database: mysql:8.0
services:
  appserver:
    portforward: 6101
I saw a similar post about a problem with LocalWP which does about the same thing Lando does.
Maybe try adding the flag --hmr-hostname localhost.
It's either that or --hmr-hostname testwp.lndo.site.
UPDATE:
After checking the Parcel CLI docs, the flag could also be --hmr-host localhost; try that as well.
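With the port from the question, the watch command would then look something like this (which flag name applies depends on your Parcel version, so check the Parcel CLI docs):
parcel watch dev/scripts.js --out-dir prod/ --hmr-port 6101 --hmr-host localhost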
I'm trying to set up a local environment for microservices using minikube. My cluster consists of 4 pods, and the minikube IP for all 4 of them is the same. However, each service runs on a unique NodePort.
E.g.: 172.42.159.12:12345 & 172.42.159.12:23456
Ingress generates them as
http://172.42.159.12:12345
http://172.42.159.12:23456
http://172.42.159.12:34567
http://172.42.159.12:45678
They all work fine when accessed via the IP, and they work fine when using a load balancer in a deployed cloud environment.
But I want this to work on my minikube, and I can't use /etc/hosts to assign a domain name to each service because it does not accept the NodePorts being passed in.
Any help on this is really appreciated.
So I found a solution for this.
The only way I found to do it is with a third-party app called Fiddler.
How To:
Download and run Fiddler
Open Fiddler => Rules => Customize Rules
Scroll down to find static function OnBeforeRequest(oSession: Session)
Pass in
if (oSession.HostnameIs("your-domain.com")) {
    oSession.bypassGateway = true;
    oSession["x-overrideHost"] = "minikube_ip:your_port";
}
Save File
My question is related to a question I asked earlier.
Forward packets between SR-IOV Virtual Function (VF) NICs
Basically what I want to do is use 4 SR-IOV functions of an Intel 82599ES and direct traffic between the VFs as I need. The setup is something like this (don't mind the X710, I use an 82599ES now).
For the sake of simplicity in testing, I'm only using one VM running warp17 to generate traffic, sending it through VF1 and receiving it back from VF3. Since the newer DPDK versions have a switching function, as described in https://doc.dpdk.org/guides-18.11/prog_guide/switch_representation.html?highlight=switch, I'm trying to use testpmd to configure switching. But it seems testpmd doesn't work with any flow commands I enter. All I get is "Bad argument". For example, it doesn't work with this command:
flow create 1 ingress pattern / end actions port_id id 3 / end
My procedure is like this:
Bind my PF (82599ES) to the igb_uio driver
Create 4 VFs using the following command:
echo "4" | sudo tee /sys/bus/pci/devices/0000:65:00.0/max_vfs
Bind 2 VFs to the vfio-pci driver using:
echo "8086 10ed" | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id
sudo ./usertools/dpdk-devbind.py -b vfio-pci 0000:65:10.0 0000:65:10.2
Use PCI passthrough to attach the VFs to the VM and start the VM
sudo qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -hda WARP17-disk1.qcow2 -m 6144 \
    -display vnc=:0 -redir tcp:2222::22 \
    -net nic,model=e1000 -net user,name=mynet0 \
    -device pci-assign,romfile=,host=0000:65:10.0 \
    -device pci-assign,romfile=,host=0000:65:10.2
Run testpmd with the PF and 2 port representors of the VFs
sudo ./testpmd --lcores 1,2 -n 4 -w 65:00.0,representor=0-1 --socket-mem 1024 --socket-mem 1024 --proc-type auto --file-prefix testpmd-pf -- -i --port-topology=chained
Am I doing something wrong or is this the nature of testpmd?
My dpdk version is 18.11.9
Please note that the 82599ES uses the ixgbe PMD and the X710 uses the i40e PMD. They are different and have different properties. Comparing the documentation of the ixgbe PMD (http://doc.dpdk.org/guides/nics/ixgbe.html) and the i40e PMD (http://doc.dpdk.org/guides/nics/i40e.html), the Flow Director functionality applies to ingress packets (packets received from the external port into the ASIC). The Floating VEB function is the feature you need, but it is present only in the X710, not in the 82599ES.
To enable the floating VEB one needs to use -w 84:00.0,enable_floating_veb=1 on the X710. But this limits your functionality: you will not be able to receive and send on the physical port.
The best option is to use 2 * 10Gbps ports, where dpdk-0 is used by warp17/pktgen/trex and dpdk-1 is used by vm-1/vm-2/vm-3. The easiest criterion to control is matching the DST MAC address to a VF.
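For example, a testpmd flow rule that steers traffic to a VF by destination MAC could look like this (a sketch; the port number and MAC are placeholders for your representor port and VF MAC):
flow create 0 ingress pattern eth dst is aa:bb:cc:dd:ee:01 / end actions port_id id 1 / end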
Setup:
Create the necessary VFs for port-0 and port-1.
Share the VFs with the relevant VMs.
Bind the DPDK VF ports to igb_uio.
From the traffic generator on port-0, send traffic with the destination MAC address of the relevant VF.
[P.S.] This is the information we discussed over Skype.
I'm running the Ubuntu 18.04 cloud image and trying to configure networking through cloud-init. For some reason it ignores my network settings when I try to assign a static IP and just falls back to using DHCP. I'm not sure why, and I'm not sure how to debug it. Does anyone know if I am doing something wrong, or how I should troubleshoot this further?
Here is my config.yaml I'm using to generate my config.img:
#cloud-config
network:
  version: 2
  ethernets:
    ens2:
      dhcp4: false
      dhcp6: false
      addresses: [10.0.0.40/24]
      gateway4: 10.0.0.1
password: secret # for the 'ubuntu' user in case we can't SSH in
chpasswd: { expire: false }
ssh_pwauth: true
users:
  - default
  - name: brennan
    ssh_import_id: gh:brennancheung
    sudo: ALL=(ALL) NOPASSWD:ALL
hostname: vm
runcmd:
  - [ sh, -xc, "echo Here is the network config for your instance" ]
  - [ ip, a ]
final_message: "Cloud init is done. Woohoo!"
Everything else in the config seems to be working; it's as if it doesn't even see the network section.
I'm attaching the .img as a cdrom to read the cloud-init. You can see how I'm running it here: https://github.com/brennancheung/playbooks/blob/master/cloud-init-lab/Makefile
NOTE: Once I'm logged into the box I can replace the config in /etc/netplan with the network section above and re-apply it and the networking comes up fine with a static IP. So I think there aren't any obvious errors that I am missing. This leads me to believe it is related to the cloud-init networking module(s) and not netplan itself.
I finally figured it out. Hopefully this helps someone else.
Apparently you can't supply networking configuration in user-data. You have to specify it in the cloud provider's data source or in the metadata. To do that, you have to move the network section into its own file and build the cloud-init image with the --network-config=... option.
Ex:
cloud-localds -v --network-config=network-config-v2.yaml seed.img user-data.yaml
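The network-config file then contains just the network settings from the question's config. A sketch of network-config-v2.yaml for the NoCloud data source (note the top-level "network:" key is dropped here; some cloud-init versions accept either form):
version: 2
ethernets:
  ens2:
    dhcp4: false
    dhcp6: false
    addresses: [10.0.0.40/24]
    gateway4: 10.0.0.1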
I have the complete setup for configuring and booting a cloud instance in a local KVM if it helps anyone else out.
https://github.com/brennancheung/playbooks/tree/master/cloud-init-lab
If you notice, in /etc/cloud/cloud.cfg.d there exists a file called 99-fake-cloud.cfg (or something similar). If you delete it, cloud-init will configure the network using the parameters in your user-data file (i.e. /etc/cloud/cloud.cfg).
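A sketch of that cleanup, assuming the file name above (list the directory first to confirm what it is actually called on your system):
ls /etc/cloud/cloud.cfg.d/
sudo rm /etc/cloud/cloud.cfg.d/99-fake-cloud.cfg
sudo cloud-init clean --reboot   # clear cloud-init state and re-run it on boot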
I just realized that GeoIP is present by default within the nginx-ingress controller in the context of Kubernetes. I looked around, but being new to nginx GeoIP, I don't have much of a clue about how to benefit from it.
Firstly, is there any declarative setup to effectively have it working? A ConfigMap setup, or so?
Secondly, how is such info passed from the nginx-ingress to an app? Is the info present in the headers? Is there any extra setup to apply?
Thanks a lot for any experienced input; best
Below you will find useful documentation about how to configure GeoIP2 for an nginx-ingress Kubernetes deployment.
Example Nginx Configuration ConfigMap
You will find the expected ConfigMap name in the nginx controller container entrypoint or environment variables. Furthermore, you can override this name; how to do so depends on your nginx installation/deployment method.
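For example, with a typical ingress-nginx deployment you can read the --configmap argument the controller was started with (a sketch; the deployment name and namespace are assumptions, adjust to your install):
kubectl -n ingress-nginx get deployment ingress-nginx-controller -o yaml | grep -- --configmap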
ConfigMap Nginx supported configurations
There you will find a list of all the supported configs/properties, plus a short description of each and how to use it.
For this specific question, the property to configure GeoIP2 is "use-geoip2" (link below).
Enable GeoIP2
Remark: you will need a MaxMind license and must add a flag to the nginx controller command providing it.
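A minimal sketch of the ConfigMap side, assuming the ConfigMap name and namespace your controller expects (see above):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # must match the controller's --configmap flag
  namespace: ingress-nginx
data:
  use-geoip2: "true"
The controller also needs your MaxMind license key so it can download the databases; in recent ingress-nginx versions that is the --maxmind-license-key command-line flag.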
The ngx_http_geoip_module module creates variables with values depending on the client IP address, using the precompiled MaxMind databases.
This module is not built by default; it should be enabled with the --with-http_geoip_module configuration parameter.
The module analyzes the client IP address, connects to the defined database, fetches the location information, and offers variables based on it, like the country or city of the connection's origin. Some examples:
$geoip_country_code - two-letter country code
$geoip_city - city name
$geoip_postal_code - postal code
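Regarding passing the info to an app: once the module is active, variables like the ones above can be forwarded as request headers, e.g. via the ingress-nginx configuration-snippet annotation on your Ingress (a sketch; the header names are arbitrary):
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header X-Country-Code $geoip_country_code;
  proxy_set_header X-City $geoip_city;
Your app then reads the X-Country-Code / X-City headers like any other request header.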