Default settings Raspberry Pi /etc/network/interfaces - raspberry-pi

I changed the settings in /etc/network/interfaces, and since that change my internet no longer works.
Now I want to change it back, but I can't find the default settings.
If you have the defaults, could you please post them here?

For my Raspberry Pi 3 Model B it was:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
allow-hotplug wlan0
iface wlan0 inet manual
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
allow-hotplug wlan1
iface wlan1 inet manual
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

These are the default settings I have for /etc/network/interfaces (including WiFi settings) for my Raspberry Pi 1:
auto lo
iface lo inet loopback
iface eth0 inet dhcp
allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp
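Both configurations above hand Wi-Fi off to /etc/wpa_supplicant/wpa_supplicant.conf. As a minimal sketch of that file (the SSID and passphrase are placeholders, not from the original post), a stock Raspbian version typically looks like:
# /etc/wpa_supplicant/wpa_supplicant.conf (sketch; replace SSID/passphrase with your own)
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}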

Assuming that you have a DHCP server running on your router, I would use:
# /etc/network/interfaces
auto lo eth0
iface lo inet loopback
iface eth0 inet dhcp
After changing the file, issue (as root):
/etc/init.d/networking restart
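As a minimal sketch of the whole restore (the backup file name and the systemctl alternative are assumptions, not from the original answer):
# back up whatever is there now, then restore the default stanzas shown above
sudo cp /etc/network/interfaces /etc/network/interfaces.bak
sudo nano /etc/network/interfaces       # paste the default configuration
sudo /etc/init.d/networking restart     # or: sudo systemctl restart networking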

Related

Accessing istio-service-mesh from outside

I have installed minikube and Docker to run a Kubernetes cluster on a virtual machine (Ubuntu 20.04). After that, I installed Istio and deployed the sample Bookinfo application. So far, so good. The problem is that I can't access the application from outside the virtual machine (from my local computer). I've done everything the Istio tutorial says, but at the point where it says the application should be reachable, I simply can't reach it.
The istio-ingressgateway receives an IP address, but it's the same as the cluster IP, so of course my local computer can't find it, as it only knows the address of the node (the VM). And the minikube IP is also different from the VM IP.
istio-ingressgateway:
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-egressgateway ClusterIP 10.110.61.129 <none> 80/TCP,443/TCP,15443/TCP 12m
istio-ingressgateway LoadBalancer 10.106.72.253 10.106.72.253 15021:32690/TCP,80:32155/TCP,443:30156/TCP,31400:31976/TCP,15443:31412/TCP 12m
istiod ClusterIP 10.98.94.248 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 13m
minikube ip:
$ echo $(minikube ip)
192.168.49.2
Node addresses:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:95:be:75 brd ff:ff:ff:ff:ff:ff
inet 192.168.164.131/24 brd 192.168.164.255 scope global dynamic ens33
valid_lft 1069sec preferred_lft 1069sec
inet6 fe80::20c:29ff:fe95:be75/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:e2:d8:bc:fc brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:e2ff:fed8:bcfc/64 scope link
valid_lft forever preferred_lft forever
6: br-1bb350499bef: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:19:18:56:1b brd ff:ff:ff:ff:ff:ff
inet 192.168.49.1/24 brd 192.168.49.255 scope global br-1bb350499bef
valid_lft forever preferred_lft forever
inet6 fe80::42:19ff:fe18:561b/64 scope link
valid_lft forever preferred_lft forever
12: vethec1def3@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-1bb350499bef state UP group default
link/ether 56:bd:8c:fe:4c:6b brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::54bd:8cff:fefe:4c6b/64 scope link
valid_lft forever preferred_lft forever
The address that should work according to the Istio documentation is http://192.168.49.2:32155/productpage (in my configuration), but requests to it end in a connection timeout.
Am I missing something?
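As a hedged sketch (not part of the original post), one common way to reach the ingress gateway from outside the VM in a setup like this is to forward its port onto the VM's own address and test it from the local machine; the ports and addresses below are the ones shown above:
# on the VM: expose the ingress gateway's HTTP port on all VM addresses
kubectl -n istio-system port-forward --address 0.0.0.0 svc/istio-ingressgateway 8080:80
# from the local computer, using the VM address from `ip a` (ens33)
curl -v http://192.168.164.131:8080/productpage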

CentOS VM (managed by OpenStack): added a secondary IP, but the secondary IP cannot ping other hosts

I'd like to add a secondary IP address to eth0 on a CentOS VM managed by OpenStack. The result: I cannot ping another VM's IP from the secondary IP. Could you help?
Steps to reproduce:
ip -f inet addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 172.22.42.220/24 brd 172.22.42.255 scope global noprefixroute dynamic eth0
valid_lft 83609sec preferred_lft 83609sec
ping -I 172.22.42.220 172.22.42.1 is OK
Add a secondary IP with: ip -f inet addr add 172.22.42.222/32 brd 172.22.42.255 dev eth0
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 172.22.42.220/24 brd 172.22.42.255 scope global noprefixroute dynamic eth0
valid_lft 83368sec preferred_lft 83368sec
inet 172.22.42.222/32 brd 172.22.42.255 scope global eth0
valid_lft forever preferred_lft forever
ping -I 172.22.42.220 172.22.42.222 and ping -I 172.22.42.222 172.22.42.220 are OK (-I means source IP)
ping -I 172.22.42.220 172.22.42.1 is OK but ping -I 172.22.42.222 172.22.42.1 fails
You have to add the additional (secondary) IP address to the same port ID in OpenStack first, as an allowed address pair.
Here is an example:
neutron port-update a5e93de7-927a-5402-b545-17f79538d3a6 --allowed_address_pairs list=true type=dict mac_address=ce:9e:5d:ad:6d:80,ip_address=10.101.11.5 ip_address=10.101.11.6
Then you can check with:
neutron port-show a5e93de7-927a-5402-b545-17f79538d3a6
So, if you know your OpenStack server instance name, you can find the port ID with:
openstack port list --server testserver01
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
| a5e93de7-927a-5402-b545-17f79538d3a6| | fa:16:3e:5d:73:24 | ip_address='10.10.0.1', subnet_id='89387a48-5c5e-4dd0-8e0a-2181c97ec19a' | ACTIVE |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------+--------+
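If you use the unified openstack client instead of the older neutron client, an equivalent sketch (the port ID is the example one above, and 172.22.42.222 is the secondary address from the question) would be:
# add the secondary address as an allowed address pair on the existing port
openstack port set --allowed-address ip-address=172.22.42.222 a5e93de7-927a-5402-b545-17f79538d3a6
# verify the allowed address pairs on the port
openstack port show a5e93de7-927a-5402-b545-17f79538d3a6 -c allowed_address_pairs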

CentOS 7.2 cannot start dhcpd

I have a server:
[root@localhost network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: em1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 04:7d:7b:ad:94:e4 brd ff:ff:ff:ff:ff:ff
3: em2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 04:7d:7b:ad:94:e5 brd ff:ff:ff:ff:ff:ff
inet 103.57.111.1/29 brd 103.57.111.7 scope global em2
valid_lft forever preferred_lft forever
inet 103.57.111.2/29 brd 103.57.111.7 scope global secondary em2
valid_lft forever preferred_lft forever
inet 103.57.111.3/29 brd 103.57.111.7 scope global secondary em2
valid_lft forever preferred_lft forever
inet 103.57.111.4/29 brd 103.57.111.7 scope global secondary em2
valid_lft forever preferred_lft forever
inet 103.57.111.5/29 brd 103.57.111.7 scope global secondary em2
valid_lft forever preferred_lft forever
inet6 fe80::67d:7bff:fead:94e5/64 scope link
valid_lft forever preferred_lft forever
I installed the DHCP server with yum install -y dhcp and used the following configuration:
cat /etc/dhcp/dhcpd.conf:
default-lease-time 600;
max-lease-time 7200;
host passacaglia {
hardware ethernet 04:7d:7b:67:50:80;
fixed-address 45.117.42.5;
}
cat /etc/sysconfig/dhcpd:
DHCPDARGS=em2
That is the whole DHCP server configuration; iptables is stopped and SELinux is disabled.
The OS is CentOS 7.2.
When I start dhcpd:
[root@localhost network-scripts]# systemctl restart dhcpd
Job for dhcpd.service failed because the control process exited with error code. See "systemctl status dhcpd.service" and "journalctl -xe" for details.
[root@localhost network-scripts]# journalctl -xe
Dec 19 00:26:59 localhost.localdomain dhcpd[18512]: on ftp.isc.org. Features have been added and other changes
Dec 19 00:26:59 localhost.localdomain dhcpd[18512]: have been made to the base software release in order to make
Dec 19 00:26:59 localhost.localdomain dhcpd[18512]: it work better with this distribution.
Dec 19 00:26:59 localhost.localdomain dhcpd[18512]:
Dec 19 00:26:59 localhost.localdomain dhcpd[18512]: Please report for this software via the CentOS Bugs Database:
Dec 19 00:26:59 localhost.localdomain dhcpd[18512]: http://bugs.centos.org/
Dec 19 00:26:59 localhost.localdomain dhcpd[18512]:
Dec 19 00:26:59 localhost.localdomain dhcpd[18512]: exiting.
Dec 19 00:26:59 localhost.localdomain systemd[1]: dhcpd.service: main process exited, code=exited, status=1/FAILURE
Dec 19 00:26:59 localhost.localdomain systemd[1]: Failed to start DHCPv4 Server Daemon.
-- Subject: Unit dhcpd.service has failed
It seems there is something missing in your configuration: your subnet definition. There also aren't any interfaces listening on that subnet.
subnet 45.117.42.0 netmask 255.255.255.0 {
option routers 45.117.42.1;
option subnet-mask 255.255.255.0;
option domain-search "tecmint.lan";
option domain-name-servers 45.117.42.1;
range 45.117.42.10 45.117.42.100;
range 45.117.42.120 45.117.42.200;
}
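For context (an assumption, since the decisive log lines are not quoted above): dhcpd typically exits like this when the interface it listens on has no matching subnet declaration, e.g. "No subnet declaration for em2 (103.57.111.1)". A minimal sketch that lets dhcpd start on em2, alongside the 45.117.42.0/24 block above, would be:
# /etc/dhcp/dhcpd.conf (sketch)
default-lease-time 600;
max-lease-time 7200;
# em2 sits in 103.57.111.0/29; an (even empty) declaration lets dhcpd listen there
subnet 103.57.111.0 netmask 255.255.255.248 {
}
# keep the host entry from the question
host passacaglia {
    hardware ethernet 04:7d:7b:67:50:80;
    fixed-address 45.117.42.5;
}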

How to map a container's interface names to the networks created in docker-compose?

I am setting up three Docker containers and two networks with docker-compose (version 1.21.1):
version: '2.1'
services:
  app1:
    build:
      context: .
      dockerfile: "Dockerfile"
    depends_on:
      - redis
    networks:
      - pub
      - default
  redis:
    build:
      context: "tests/redis"
    networks:
      - default
  app2:
    build:
      context: "tests/app2"
    networks:
      - pub
      - default
networks:
  pub:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
        - subnet: "fe80::42:acff:fe10:ee04/64"
  default:
In app1:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
696: eth0@if697: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:10:ee:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.16.238.3/24 brd 172.16.238.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe10:ee03/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::3/64 scope link nodad
valid_lft forever preferred_lft forever
698: eth1@if699: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:f0:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.240.4/20 brd 192.168.255.255 scope global eth1
valid_lft forever preferred_lft forever
However, I want eth1 to support IPv6 as well, or ideally both eth0 and eth1.
The documentation doesn't mention anything about that, nor could I find an option among the network options.
Is there a way to do this?
I had to enable IPv6 for both networks and split the IPv6 range into two subnets.
For the CIDR part I used an online IPv6 subnet calculator, but I am not sure yet why this worked :p
This is the configuration that worked:
networks:
  default:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "2001:db8:8000::/33"
  pub:
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: "2001:db8::/33"

How to have two NICs in Ubuntu Server without affecting DNS?

On my Ubuntu Server 16.04 I have this /etc/network/interfaces file:
# The primary network interface
auto eth0
iface eth0 inet dhcp
#auto eth1
#iface eth1 inet static
# address 10.0.0.41
# netmask 255.255.255.0
# network 10.0.0.0
# broadcast 10.0.0.255
# gateway 10.0.0.1
eth0 is connected to the DSL line. If I uncomment the eth1 section to enable the second NIC, I can't ping remote servers like yahoo.com:
ping yahoo.com
PING yahoo.com (98.138.253.109) 56(84) bytes of data.
From 10.0.0.41 icmp_seq=1 Destination Host Unreachable
From 10.0.0.41 icmp_seq=2 Destination Host Unreachable
From 10.0.0.41 icmp_seq=3 Destination Host Unreachable
I found a solution: remove the gateway 10.0.0.1 line, so the DSL connection on eth0 keeps the only default route.
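A sketch of the resulting eth1 stanza (same addressing as in the question, with only the gateway line dropped):
auto eth1
iface eth1 inet static
    address 10.0.0.41
    netmask 255.255.255.0
    network 10.0.0.0
    broadcast 10.0.0.255
    # no gateway here: eth0 (DHCP over DSL) keeps the default route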