GRE point to point and AWS VPC
This is a quick and simple post on setting up a GRE tunnel between two instances in an Amazon VPC. It isn't all that useful in the real world on its own, but it is part of exploring how to build overlay and other network models within an EC2 VPC. I can't find the original link I was working from, but there are several blog posts out there that cover setting up a simple GRE point-to-point tunnel on Linux, such as this one.
First: AWS spot instances
I like spot instances, so that's what I used in this example. You put in a bid and can get EC2 instances for a fraction of the regular on-demand price. These instances can go away at any time depending on market forces, but I have found they are perfect for testing, and so far I have not had one disappear out from under me; I've always terminated them myself after a few hours. Usually I start up a couple of instances in the morning and shut them down at the end of the day. It costs about $0.02 for two or three m3.medium instances, if not less. Great for labs or training. I also like that it forces me to get used to destroying and rebuilding instances, and to automate as much as possible so that new instances are ready as quickly as possible.
So create two AWS instances in the same VPC. In the example that follows they have the IPs of 172.22.1.{76,238}.
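If you want to request the spot instances from the command line, the request looks roughly like the sketch below. The price, AMI, subnet, security group, and key name are all placeholders here, not values from this lab, so substitute your own.
# Sketch of a spot request for two m3.medium instances in an existing VPC subnet.
# All IDs below (AMI, subnet, security group, key) are placeholders.
aws ec2 request-spot-instances \
    --spot-price "0.02" \
    --instance-count 2 \
    --type "one-time" \
    --launch-specification '{
        "ImageId": "ami-xxxxxxxx",
        "InstanceType": "m3.medium",
        "KeyName": "my-key",
        "SubnetId": "subnet-xxxxxxxx",
        "SecurityGroupIds": ["sg-xxxxxxxx"]
    }'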
Configure interfaces
First we’ll manually create network interface files. The OS here is Ubuntu 14.04.
root@ip-172-22-1-238:/etc/network/interfaces.d# cat gre1.cfg
auto gre1
iface gre1 inet tunnel
mode gre
netmask 255.255.255.255
address 10.0.0.2
dstaddr 10.0.0.1
endpoint 172.22.1.76
local 172.22.1.238
ttl 255
And on the other node:
root@ip-172-22-1-76:/etc/network/interfaces.d# cat gre1.cfg
auto gre1
iface gre1 inet tunnel
mode gre
netmask 255.255.255.255
address 10.0.0.1
dstaddr 10.0.0.2
endpoint 172.22.1.238
local 172.22.1.76
ttl 255
Then start the interfaces.
$ # both instances
$ ifup gre1
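For reference, the ifupdown tunnel method is doing roughly the equivalent of the following ip commands (shown for the .76 node). This is just a sketch of what ifup sets up, not an additional step.
# Rough manual equivalent of the gre1 stanza on 172.22.1.76 (illustrative only)
ip tunnel add gre1 mode gre local 172.22.1.76 remote 172.22.1.238 ttl 255
ip addr add 10.0.0.1 peer 10.0.0.2/32 dev gre1
ip link set gre1 up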
And now you should see something like…
root@ip-172-22-1-76:/etc/network/interfaces.d# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.22.1.1 0.0.0.0 UG 0 0 0 eth0
10.0.0.2 0.0.0.0 255.255.255.255 UH 0 0 0 gre1
172.22.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
root@ip-172-22-1-76:/etc/network/interfaces.d# ip ro sh
default via 172.22.1.1 dev eth0
10.0.0.2 dev gre1 proto kernel scope link src 10.0.0.1
172.22.1.0/24 dev eth0 proto kernel scope link src 172.22.1.76
Iperf
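The runs below assume the classic iperf (iperf2) is installed on both instances and that a server is already listening on the target node, along these lines:
# On the receiving instance (e.g. 172.22.1.238), start an iperf server on the default TCP port 5001
iperf -s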
Without GRE overlay:
# iperf -c 172.22.1.238
------------------------------------------------------------
Client connecting to 172.22.1.238, TCP port 5001
TCP window size: 325 KByte (default)
------------------------------------------------------------
[ 3] local 172.22.1.76 port 52187 connected with 172.22.1.238 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 464 MBytes 388 Mbits/sec
With GRE:
# iperf -c 10.0.0.2
------------------------------------------------------------
Client connecting to 10.0.0.2, TCP port 5001
TCP window size: 325 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.1 port 47969 connected with 10.0.0.2 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 462 MBytes 387 Mbits/sec
Surprisingly close. Perhaps too surprisingly.
tcpdump
If I ping one node from the other and then tcpdump GRE traffic on eth0…
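The ping in question is run from the other instance against the tunnel address, something like:
# Run on the 172.22.1.238 instance: ping the far end of the tunnel
ping 10.0.0.1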
root@ip-172-22-1-76:/etc/network/interfaces.d# sudo tcpdump -n -e -ttt -i eth0 proto gre
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
00:00:00.000000 02:28:2a:9e:bb:03 > 02:f4:ec:79:c6:7b, ethertype IPv4 (0x0800), length 122: 172.22.1.238 > 172.22.1.76: GREv0, proto IPv4 (0x0800), length 88: 10.0.0.2 > 10.0.0.1: ICMP echo request, id 2533, seq 1, length 64
00:00:00.000043 02:f4:ec:79:c6:7b > 02:28:2a:9e:bb:03, ethertype IPv4 (0x0800), length 122: 172.22.1.76 > 172.22.1.238: GREv0, proto IPv4 (0x0800), length 88: 10.0.0.1 > 10.0.0.2: ICMP echo reply, id 2533, seq 1, length 64
00:00:01.000261 02:28:2a:9e:bb:03 > 02:f4:ec:79:c6:7b, ethertype IPv4 (0x0800), length 122: 172.22.1.238 > 172.22.1.76: GREv0, proto IPv4 (0x0800), length 88: 10.0.0.2 > 10.0.0.1: ICMP echo request, id 2533, seq 2, length 64
00:00:00.000038 02:f4:ec:79:c6:7b > 02:28:2a:9e:bb:03, ethertype IPv4 (0x0800), length 122: 172.22.1.76 > 172.22.1.238: GREv0, proto IPv4 (0x0800), length 88: 10.0.0.1 > 10.0.0.2: ICMP echo reply, id 2533, seq 2, length 64
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
Listening on gre1 for icmp:
root@ip-172-22-1-76:/etc/network/interfaces.d# sudo tcpdump -n -e -ttt -i gre1 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on gre1, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
00:00:00.000000 In ethertype IPv4 (0x0800), length 100: 10.0.0.2 > 10.0.0.1: ICMP echo request, id 2534, seq 1, length 64
00:00:00.000024 Out ethertype IPv4 (0x0800), length 100: 10.0.0.1 > 10.0.0.2: ICMP echo reply, id 2534, seq 1, length 64
00:00:00.998982 In ethertype IPv4 (0x0800), length 100: 10.0.0.2 > 10.0.0.1: ICMP echo request, id 2534, seq 2, length 64
00:00:00.000023 Out ethertype IPv4 (0x0800), length 100: 10.0.0.1 > 10.0.0.2: ICMP echo reply, id 2534, seq 2, length 64
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
GRE overhead
GRE adds 24 bytes of overhead: a 20-byte outer IPv4 header plus a 4-byte GRE header. So the GRE tunnel MTU is 8977, 24 bytes less than eth0's 9001-byte MTU. (Some EC2 instances support jumbo frames, and in this case my spot instances came up with an MTU of 9001, but that is not always the case, nor always desirable.)
ubuntu@ip-172-22-1-76:~$ ip ad sh eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP group default qlen 1000
link/ether 02:f4:ec:79:c6:7b brd ff:ff:ff:ff:ff:ff
inet 172.22.1.76/24 brd 172.22.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f4:ecff:fe79:c67b/64 scope link
valid_lft forever preferred_lft forever
ubuntu@ip-172-22-1-76:~$ ip ad sh gre1
5: gre1@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8977 qdisc noqueue state UNKNOWN group default
link/gre 172.22.1.76 peer 172.22.1.238
inet 10.0.0.1 peer 10.0.0.2/32 scope global gre1
valid_lft forever preferred_lft forever
inet6 fe80::5efe:ac16:14c/64 scope link
valid_lft forever preferred_lft forever
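One way to sanity-check that 8977-byte tunnel MTU (not part of the original setup, just a verification idea) is to ping across the tunnel with the don't-fragment flag set. The largest ICMP payload that fits is 8977 minus the 20-byte IP header and 8-byte ICMP header, i.e. 8949 bytes; one byte more should be rejected locally.
# Largest payload that fits the gre1 MTU: 8977 - 20 (IP) - 8 (ICMP) = 8949 bytes
ping -c 2 -M do -s 8949 10.0.0.2    # should succeed
ping -c 2 -M do -s 8950 10.0.0.2    # should fail with "message too long"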
Conclusion
This was just a simple exercise for me to get used to networking within an AWS VPC, and to gradually work towards more complicated overlay (or other) models and software-defined solutions.