How to set up a `veth` virtual network

For veth to work, one end of the pair must be bridged with another interface. Since you want to keep this all virtual, you may bridge the vm1 end of the pair (vm2 is the other end) with a tap-type virtual interface, in a bridge called brm. Now you give IP addresses to brm and to vm2 (10.0.0.1 and 10.0.0.2, respectively), enable IPv4 forwarding by means of

echo 1 > /proc/sys/net/ipv4/ip_forward

bring all interfaces up, and add a route instructing the kernel how to reach IP addresses 10.0.0.0/24. That's all.
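
If you prefer, the same setting can be made through the sysctl interface; this sketch is equivalent to the echo above (an entry in /etc/sysctl.conf would make it persistent):

sysctl -w net.ipv4.ip_forward=1
sysctl net.ipv4.ip_forward          # should now print 1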

If you want to create more pairs, repeat the steps below with different subnets, for instance 10.0.1.0/24, 10.0.2.0/24, and so on. Since you enabled IPv4 forwarding and added appropriate routes to the kernel routing table, they will be able to talk to each other right away.
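
For example, a second, independent pair on 10.0.1.0/24 could look like this (a sketch mirroring the steps below; the names vm3, vm4, tapn and brn are only illustrative):

ip link add dev vm3 type veth peer name vm4
ip link set dev vm3 up
ip tuntap add dev tapn mode tap
ip link set dev tapn up
ip link add brn type bridge
ip link set tapn master brn
ip link set vm3 master brn
ip addr add 10.0.1.1/24 dev brn
ip addr add 10.0.1.2/24 dev vm4
ip link set brn up
ip link set vm4 up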

Also, remember that most of the commands you are using (brctl, ifconfig, ...) are obsolete: the iproute2 suite has commands to do all of this; see my use of the ip command below.
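
For reference, this is roughly how the old commands map to their iproute2 equivalents (a sketch, not an exhaustive list):

brctl addbr brm                                ->  ip link add brm type bridge
brctl addif brm tapm                           ->  ip link set tapm master brm
ifconfig brm 10.0.0.1 netmask 255.255.255.0    ->  ip addr add 10.0.0.1/24 dev brm
ifconfig vm1 up                                ->  ip link set dev vm1 up
route add -net 10.0.0.0/24 dev brm             ->  ip route add 10.0.0.0/24 dev brm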

This is a correct sequence of commands for using interfaces of type veth:

first create all required interfaces,

ip link add dev vm1 type veth peer name vm2
ip link set dev vm1 up
ip tuntap add dev tapm mode tap
ip link set dev tapm up
ip link add brm type bridge

Notice we did not bring up brm and vm2, because we still have to assign them IP addresses, but we did bring up tapm and vm1, which is necessary to include them in the bridge brm. Now enslave the interfaces tapm and vm1 to the bridge brm,

ip link set tapm master brm
ip link set vm1 master brm
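
If you want to verify the enslavement at this point, either of these commands lists the bridge ports (bridge is part of the iproute2 suite as well):

ip link show master brm
bridge link show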

now give addresses to the bridge and to the remaining veth interface vm2,

ip addr add 10.0.0.1/24 dev brm
ip addr add 10.0.0.2/24 dev vm2

now bring vm2 and brm up,

ip link set brm up
ip link set vm2 up
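
A quick check that the addresses and the UP state took effect:

ip addr show dev brm
ip addr show dev vm2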

There is no need to add the route to the subnet 10.0.0.0/24 explicitly: it is generated automatically, as you can check with ip route show. This results in

ping -c1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms
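
The kernel-generated routes mentioned above look roughly like this (exact output and ordering may vary between systems):

ip route show
10.0.0.0/24 dev brm proto kernel scope link src 10.0.0.1
10.0.0.0/24 dev vm2 proto kernel scope link src 10.0.0.2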

You can also do it backwards, i.e. from vm2 back to brm:

ping -I 10.0.0.2 -c1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) from 10.0.0.2 : 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms

The most useful application of veth interfaces is the network namespace, which is what Linux containers (LXC) use. You create one called nnsm as follows

ip netns add nnsm
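
You can list the existing network namespaces at any time with:

ip netns list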

then we transfer vm2 to it,

ip link set vm2 netns nnsm 
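
At this point vm2 should no longer appear in the main namespace; a quick sketch to confirm it landed in nnsm:

ip netns exec nnsm ip link show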

we bring up the lo interface of the new network namespace (absolutely necessary),

ip netns exec nnsm ip link set dev lo up

we allow NATting in the main machine,

iptables -t nat -A POSTROUTING -o brm -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

(if you are connected to the Internet via eth0, otherwise change accordingly), start a shell in the new network namespace,

ip netns exec nnsm xterm & 

and now, if you start typing in the new xterm, you will find you are in a separate virtual machine with IP address 10.0.0.2, but you can reach the Internet. The advantage of this is that the new network namespace has its own stack, which means, for instance, you can start a VPN in it while the rest of your PC is not on the VPN. This is the contraption LXCs are based on.
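
To confirm the two MASQUERADE rules are in place (their packet counters will grow once traffic flows), you can list the nat table from the main namespace:

iptables -t nat -L POSTROUTING -n -v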

EDIT:

I made a mistake: moving the vm2 interface to the new network namespace brings it down and clears its address. Thus you need to add these commands, from within the xterm:

ip addr add 10.0.0.2/24 dev vm2
ip link set dev vm2 up
ip route add default via 10.0.0.1
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
echo "nameserver 8.8.4.4" >> /etc/resolv.conf

and now you can reach the Internet from within the xterm.
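
As an aside, the echo above modifies the global /etc/resolv.conf. If you want a resolver configuration private to the namespace, ip netns exec bind-mounts files found under /etc/netns/nnsm/ over their /etc counterparts, so something like this should work instead (a sketch):

mkdir -p /etc/netns/nnsm
echo "nameserver 8.8.8.8" > /etc/netns/nnsm/resolv.conf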

The ip commands can also be run before starting the xterm, with

ip -netns nnsm addr add 10.0.0.2/24 dev vm2
ip -netns nnsm link set dev vm2 up
ip -netns nnsm route add default via 10.0.0.1
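
Either way, here is a final sanity check run from the main namespace, assuming the NAT rules above are in place:

ip netns exec nnsm ping -c1 10.0.0.1
ip netns exec nnsm ping -c1 8.8.8.8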