[RFC,02/11] batman-adv: add basic bridge loop avoidance code

Message ID 1320015072-10313-3-git-send-email-siwu@hrz.tu-chemnitz.de (mailing list archive)
State RFC, archived

Commit Message

Simon Wunderlich Oct. 30, 2011, 10:51 p.m. UTC
  This second version of the bridge loop avoidance for batman-adv
avoids loops between the mesh and a backbone (usually a LAN).

A loop can be created when multiple batman-adv mesh nodes are connected
to the same ethernet segment and the soft-interface is bridged into
that ethernet segment. A simple visualization of the loop, involving
the most common case - a LAN as the ethernet segment:

node1  <-- LAN  -->  node2
  |                   |
wifi   <-- mesh -->  wifi

Packets from the LAN (e.g. ARP broadcasts) will circle forever from
node1 or node2 over the mesh back into the LAN.

With this patch, batman recognizes backbone gateways, nodes which are
part of the mesh and of the backbone/LAN at the same time. Each backbone
gateway "claims" clients from within the mesh to handle them
exclusively. By ensuring that only the responsible backbone gateway may
handle the traffic of its claimed clients, loops are effectively
avoided.
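
To illustrate the principle outside of the kernel code, here is a
standalone sketch of the claim rule (not code from this patch; all
names are made up for the example):

#include <stdio.h>
#include <string.h>

struct example_claim {
	unsigned char client[6];	/* claimed client MAC */
	unsigned char backbone_gw[6];	/* gateway that claimed it */
};

/* returns 1 if this gateway may handle a frame from the given client */
static int may_handle(const struct example_claim *claims, int n_claims,
		      const unsigned char *client,
		      const unsigned char *own_addr)
{
	int i;

	for (i = 0; i < n_claims; i++) {
		if (memcmp(claims[i].client, client, 6) != 0)
			continue;
		/* claimed: only the responsible backbone gw handles it */
		return memcmp(claims[i].backbone_gw, own_addr, 6) == 0;
	}
	/* unclaimed: handle it (and claim it for ourselves) */
	return 1;
}

int main(void)
{
	unsigned char node1[6]  = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01};
	unsigned char node2[6]  = {0x02, 0x00, 0x00, 0x00, 0x00, 0x02};
	unsigned char client[6] = {0x02, 0x00, 0x00, 0x00, 0x00, 0xaa};
	struct example_claim claims[1];

	memcpy(claims[0].client, client, 6);
	memcpy(claims[0].backbone_gw, node1, 6);

	/* node1 claimed the client: it forwards, node2 drops -> no loop */
	printf("node1 may handle: %d\n", may_handle(claims, 1, client, node1));
	printf("node2 may handle: %d\n", may_handle(claims, 1, client, node2));
	return 0;
}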

Signed-off-by: Simon Wunderlich <siwu@hrz.tu-chemnitz.de>

---
[2011-10-27] Changes suggested by Marek Lindner:

 * move claim types into packet.h
 * uint16_t -> short for vid in compare_*
 * remove refcount debugging
 * add comments to spinlocks to silence checkpatch --strict
 * move hardif_free_ref() to the end of the function in bla_send_claim()
 * bla_del_claim(): move free ref up to make it more clear why we free
   the reference
 * bla_purge_claims() does not need spinlock
 * move rcu_read_lock() more into the loops

Signed-off-by: Simon Wunderlich <siwu@hrz.tu-chemnitz.de>

[2011-10-27] add a type for the bla destinations

Signed-off-by: Simon Wunderlich <siwu@hrz.tu-chemnitz.de>

[2011-10-30] Changes suggested by Marek Lindner:

 * add BLA_CRC_INIT to make the initialization of the CRC more visible
 * put own_orig into bat_priv instead of using a function which
   needlessly copies it around
 * update backbone gws when primary if address changes
 * add bla_send_announce() as own function for readability
 * fix some comments
 * pass hw_src and hw_dst as parameters to handle_*() and
   check_claim_group() to avoid code duplication
 * change various debugging code
 * replace add_own_claim and del_own_claim with handle_claim() and
   handle_unclaim() to reduce code size
 * pimp up claim table debugfs output

VLAN fixes:
 * reset skb_mac_header(), it might not be set when it's a VLAN frame
 * always become a backbone gw for a VLAN when there is traffic,
   even if there are no claims, to at least let other backbone gw
   nodes know that we are able to receive on that VLAN and avoid
   broadcast loops.

Signed-off-by: Simon Wunderlich <siwu@hrz.tu-chemnitz.de>
---
 Makefile.kbuild         |    1 +
 bat_sysfs.c             |    2 +-
 bridge_loop_avoidance.c | 1240 +++++++++++++++++++++++++++++++++++++++++++++++
 bridge_loop_avoidance.h |   30 ++
 compat.c                |    8 +
 compat.h                |    2 +
 hard-interface.c        |    8 +-
 main.c                  |    6 +
 main.h                  |    6 +-
 originator.c            |    1 +
 packet.h                |   16 +
 routing.c               |    6 +
 soft-interface.c        |   11 +
 types.h                 |   28 ++
 14 files changed, 1362 insertions(+), 3 deletions(-)
 create mode 100644 bridge_loop_avoidance.c
 create mode 100644 bridge_loop_avoidance.h
  

Comments

Marek Lindner Oct. 30, 2011, 11:20 p.m. UTC | #1
Hi,

>  * always become a backbone gw for a VLAN when there is traffic,
>    even if there are no claims, to at least let other backbone gw
>    nodes know that we are able to receive on that VLAN and avoid
>    broadcast loops.

could you give some more details on this change ? Why is it not necessary 
without VLANs ?

Regards,
Marek
  
Simon Wunderlich Oct. 31, 2011, midnight UTC | #2
Hi,

On Mon, Oct 31, 2011 at 12:20:24AM +0100, Marek Lindner wrote:
> 
> Hi,
> 
> >  * always become a backbone gw for a VLAN when there is traffic,
> >    even if there are no claims, to at least let other backbone gw
> >    nodes know that we are able to receive on that VLAN and avoid
> >    broadcast loops.
> 
> could you give some more details on this change ? Why is it not necessary 
> without VLANs ?

Actually it's also useful for cases without VLANs, e.g. if there has not
been any traffic yet. However, once there is traffic, it soon flows in both
directions, claims are added, and the situation resolves itself automatically.

I had the situation that on some VLANs there was no traffic at all for some
time, and when it suddenly started, the bla system did some weird things.

Cheers
	Simon
  
Marek Lindner Nov. 1, 2011, 10:52 a.m. UTC | #3
On Sunday, October 30, 2011 23:51:03 Simon Wunderlich wrote:
> +static const uint8_t announce_mac[6] = {0x43, 0x05, 0x43, 0x05, 0x00,
> 0x00};

All we ever use are 4 bytes - we could make this shorter. Otherwise please use 
ETH_ALEN. Please use ETH_ALEN throughout the code instead of 6.


> +       bat_dbg(DBG_BLA, bat_priv,
> +               "handle_announce(): ANNOUNCE vid %d (sent "
> +               "by %pM)... CRC = %04x (nw order)\n",
> +               vid, backbone_gw->orig, crc);

It is not network byte order anymore ..


+       /* TODO: we could cal something like tt_local_del() here. */

Why should we ?


> @@ -634,7 +640,7 @@ static int batman_skb_recv(struct sk_buff *skb, struct
> net_device *dev, case BAT_TT_QUERY:
>                 ret = recv_tt_query(skb, hard_iface);
>                 break;
> -               /* Roaming advertisement */
> +               /* bridge roop avoidance query */
>         case BAT_ROAM_ADV:
>                 ret = recv_roam_adv(skb, hard_iface);
>                 break;

Small typo here but why are you even changing the comment ?


>         atomic_t gw_reselect;
>         struct hard_iface __rcu *primary_if;  /* rcu protected pointer */
>         struct vis_info *my_vis_info;
> +       uint8_t own_orig[6];            /* cache primary hardifs address */
>  };

I'd call this "primary_addr" instead of "own_orig" to be consistent with 
"primary_if". Furthermore ETH_ALEN should be used for the length.


> +struct backbone_gw {
> +       uint8_t orig[ETH_ALEN];
> +       short vid;              /* used VLAN ID */
> +       struct hlist_node hash_entry;
> +       struct bat_priv *bat_priv;
> +       unsigned long lasttime; /* last time we heard of this backbone gw */
> +       atomic_t request_sent;
> +       atomic_t refcount;
> +       struct rcu_head rcu;
> +       uint16_t crc;           /* crc checksum over all claims */
> +} __packed;
> +
> +struct claim {
> +       uint8_t addr[ETH_ALEN];
> +       short vid;
> +       struct backbone_gw *backbone_gw;
> +       unsigned long lasttime; /* last time we heard of claim (locals only)
> */
> +       struct rcu_head rcu;
> +       atomic_t refcount;
> +       struct hlist_node hash_entry;
> +} __packed;

Why are these structs packed ?


Regards,
Marek
  
Simon Wunderlich Nov. 2, 2011, 11:01 a.m. UTC | #4
Hi,

On Tue, Nov 01, 2011 at 11:52:45AM +0100, Marek Lindner wrote:
> On Sunday, October 30, 2011 23:51:03 Simon Wunderlich wrote:
> > +static const uint8_t announce_mac[6] = {0x43, 0x05, 0x43, 0x05, 0x00,
> > 0x00};
> 
> All we ever use are 4 bytes - we could make this shorter. Otherwise please use 
> ETH_ALEN. Please use ETH_ALEN throughout the code instead of 6.
> 

OK, that's right, I'll change it to 4 and use ETH_ALEN at the other place where I missed it.
 
> > +       bat_dbg(DBG_BLA, bat_priv,
> > +               "handle_announce(): ANNOUNCE vid %d (sent "
> > +               "by %pM)... CRC = %04x (nw order)\n",
> > +               vid, backbone_gw->orig, crc);
> 
> It is not network byte order anymore ..
> 

OK

> +       /* TODO: we could cal something like tt_local_del() here. */
> 
> Why should we ?
> 
> 

Because when someone claims the address, the client has roamed to the mesh.

Usually this is handled by the tt roaming code; it will only fail in a limited
horizon scenario where a node does not know the other node the client roamed to.
In this case, the tt entry will time out after a long while.

This is just an idea/optimization to add the tt_local_del(). :)
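
For illustration only, the idea would roughly look like this at the end of
handle_claim() - note that tt_local_del() is just the placeholder name from
the TODO comment and does not exist in this patch:

	/* hypothetical sketch: if another backbone gw claims a client,
	 * that client has roamed into the mesh, so our local tt entry
	 * could be dropped right away instead of waiting for a timeout.
	 * tt_local_del() is a placeholder, not an existing function. */
	if (!compare_eth(backbone_addr, bat_priv->own_orig))
		tt_local_del(bat_priv, claim_addr, "client roamed to the mesh");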

> > @@ -634,7 +640,7 @@ static int batman_skb_recv(struct sk_buff *skb, struct
> > net_device *dev, case BAT_TT_QUERY:
> >                 ret = recv_tt_query(skb, hard_iface);
> >                 break;
> > -               /* Roaming advertisement */
> > +               /* bridge roop avoidance query */
> >         case BAT_ROAM_ADV:
> >                 ret = recv_roam_adv(skb, hard_iface);
> >                 break;
> 
> Small typo here but why are you even changing the comment ?
> 

Oh thanks, this is wrong. This comes from an earlier version of the BLA table
changes where I used a special packet type. 

The comment should not be changed.

> >         atomic_t gw_reselect;
> >         struct hard_iface __rcu *primary_if;  /* rcu protected pointer */
> >         struct vis_info *my_vis_info;
> > +       uint8_t own_orig[6];            /* cache primary hardifs address */
> >  };
> 
> I'd call this "primary_addr" instead of "own_orig" to be consistent with 
> "primary_if". Furthermore ETH_ALEN should be used for the length.
> 

OK.

> > +struct backbone_gw {
> > +       uint8_t orig[ETH_ALEN];
> > +       short vid;              /* used VLAN ID */
> > +       struct hlist_node hash_entry;
> > +       struct bat_priv *bat_priv;
> > +       unsigned long lasttime; /* last time we heard of this backbone gw */
> > +       atomic_t request_sent;
> > +       atomic_t refcount;
> > +       struct rcu_head rcu;
> > +       uint16_t crc;           /* crc checksum over all claims */
> > +} __packed;
> > +
> > +struct claim {
> > +       uint8_t addr[ETH_ALEN];
> > +       short vid;
> > +       struct backbone_gw *backbone_gw;
> > +       unsigned long lasttime; /* last time we heard of claim (locals only)
> > */
> > +       struct rcu_head rcu;
> > +       atomic_t refcount;
> > +       struct hlist_node hash_entry;
> > +} __packed;
> 
> Why are these structs packed ?

For similar reasons as the tt_global/local_entry issue: the hash functions
access orig and vid from both of these structures.

However, as there are not that many shared functions, I'll split it into
choose_claim()/choose_backbone_gw() and remove the __packed attribute.

In this case, it should be a little bit shorter and cleaner than using a common
structure (no container_of() stuff).
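
Just to illustrate the idea, a choose_backbone_gw() without __packed could
hash the fields explicitly instead of the raw struct memory (a rough sketch,
not the actual code from the blaII_dirty repo):

static inline uint32_t choose_backbone_gw(const void *data, uint32_t size)
{
	const struct backbone_gw *gw = data;
	const unsigned char *vid_bytes = (const unsigned char *)&gw->vid;
	uint32_t hash = 0;
	size_t i;

	/* hash address and vid field by field, so the struct layout
	 * no longer has to be packed */
	for (i = 0; i < ETH_ALEN; i++) {
		hash += gw->orig[i];
		hash += (hash << 10);
		hash ^= (hash >> 6);
	}

	for (i = 0; i < sizeof(gw->vid); i++) {
		hash += vid_bytes[i];
		hash += (hash << 10);
		hash ^= (hash >> 6);
	}

	hash += (hash << 3);
	hash ^= (hash >> 11);
	hash += (hash << 15);

	return hash % size;
}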

Thanks for the comments, you'll find the changed stuff as patches in my blaII_dirty repo.

Cheers,
	Simon
  

Patch

diff --git a/Makefile.kbuild b/Makefile.kbuild
index bd7e93c..d626513 100644
--- a/Makefile.kbuild
+++ b/Makefile.kbuild
@@ -51,3 +51,4 @@  batman-adv-y += translation-table.o
 batman-adv-y += unicast.o
 batman-adv-y += vis.o
 batman-adv-y += compat.o
+batman-adv-y += bridge_loop_avoidance.o
diff --git a/bat_sysfs.c b/bat_sysfs.c
index c25492f..ec2e437 100644
--- a/bat_sysfs.c
+++ b/bat_sysfs.c
@@ -390,7 +390,7 @@  BAT_ATTR_UINT(gw_sel_class, S_IRUGO | S_IWUSR, 1, TQ_MAX_VALUE,
 static BAT_ATTR(gw_bandwidth, S_IRUGO | S_IWUSR, show_gw_bwidth,
 		store_gw_bwidth);
 #ifdef CONFIG_BATMAN_ADV_DEBUG
-BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 7, NULL);
+BAT_ATTR_UINT(log_level, S_IRUGO | S_IWUSR, 0, 15, NULL);
 #endif
 
 static struct bat_attribute *mesh_attrs[] = {
diff --git a/bridge_loop_avoidance.c b/bridge_loop_avoidance.c
new file mode 100644
index 0000000..641dbcc
--- /dev/null
+++ b/bridge_loop_avoidance.c
@@ -0,0 +1,1240 @@ 
+/*
+ * Copyright (C) 2011 B.A.T.M.A.N. contributors:
+ *
+ * Simon Wunderlich
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+ * 02110-1301, USA
+ *
+ */
+
+#include "main.h"
+#include "hash.h"
+#include "hard-interface.h"
+#include "originator.h"
+#include "bridge_loop_avoidance.h"
+#include "send.h"
+
+#include <linux/etherdevice.h>
+#include <linux/crc16.h>
+#include <linux/if_arp.h>
+#include <net/arp.h>
+#include <linux/if_vlan.h>
+
+static const uint8_t claim_dest[6] = {0xff, 0x43, 0x05, 0x00, 0x00, 0x00};
+static const uint8_t announce_mac[6] = {0x43, 0x05, 0x43, 0x05, 0x00, 0x00};
+
+static void bla_periodic_work(struct work_struct *work);
+static void bla_send_announce(struct bat_priv *bat_priv,
+		struct backbone_gw *backbone_gw);
+
+/*
+ * choose_claim
+ * @data: data to hash (claim or backbone gw entry: address + vid)
+ * @size: size of the hash table
+ *
+ * Returns the index of the hash bucket for the given entry.
+ */
+
+static inline uint32_t choose_claim(const void *data, uint32_t size)
+{
+	const unsigned char *key = data;
+	uint32_t hash = 0;
+	size_t i;
+
+	for (i = 0; i < ETH_ALEN + sizeof(short); i++) {
+		hash += key[i];
+		hash += (hash << 10);
+		hash ^= (hash >> 6);
+	}
+
+	hash += (hash << 3);
+	hash ^= (hash >> 11);
+	hash += (hash << 15);
+
+	return hash % size;
+}
+
+/* compares address and vid of two backbone gws */
+static int compare_backbone_gw(const struct hlist_node *node, const void *data2)
+{
+	const void *data1 = container_of(node, struct backbone_gw,
+					 hash_entry);
+
+	return (memcmp(data1, data2, ETH_ALEN + sizeof(short)) == 0 ? 1 : 0);
+}
+
+/* compares address and vid of two claims */
+static int compare_claim(const struct hlist_node *node, const void *data2)
+{
+	const void *data1 = container_of(node, struct claim,
+					 hash_entry);
+
+	return (memcmp(data1, data2, ETH_ALEN + sizeof(short)) == 0 ? 1 : 0);
+}
+
+/* free a backbone gw */
+static void backbone_gw_free_ref(struct backbone_gw *backbone_gw)
+{
+	if (atomic_dec_and_test(&backbone_gw->refcount))
+		kfree_rcu(backbone_gw, rcu);
+}
+
+/* finally deinitialize the claim */
+static void claim_free_rcu(struct rcu_head *rcu)
+{
+	struct claim *claim;
+
+	claim = container_of(rcu, struct claim, rcu);
+
+	backbone_gw_free_ref(claim->backbone_gw);
+	kfree(claim);
+}
+
+/* free a claim, call claim_free_rcu if its the last reference */
+static void claim_free_ref(struct claim *claim)
+{
+	if (atomic_dec_and_test(&claim->refcount))
+		call_rcu(&claim->rcu, claim_free_rcu);
+}
+
+/*
+ * @bat_priv: the bat priv with all the soft interface information
+ * @data: search data (may be local/static data)
+ *
+ * looks for a claim in the hash, and returns it if found
+ * or NULL otherwise.
+ */
+
+static struct claim *claim_hash_find(struct bat_priv *bat_priv,
+		struct claim *data)
+{
+	struct hashtable_t *hash = bat_priv->claim_hash;
+	struct hlist_head *head;
+	struct hlist_node *node;
+	struct claim *claim;
+	struct claim *claim_tmp = NULL;
+	int index;
+
+	if (!hash)
+		return NULL;
+
+	index = choose_claim(data, hash->size);
+	head = &hash->table[index];
+
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(claim, node, head, hash_entry) {
+		if (!compare_claim(&claim->hash_entry, data))
+			continue;
+
+		if (!atomic_inc_not_zero(&claim->refcount))
+			continue;
+
+		claim_tmp = claim;
+		break;
+	}
+	rcu_read_unlock();
+
+	return claim_tmp;
+}
+
+/*
+ * @bat_priv: the bat priv with all the soft interface information
+ * @addr: the address of the originator
+ * @vid: the VLAN ID
+ *
+ * looks for a claim in the hash, and returns it if found
+ * or NULL otherwise.
+ */
+
+static struct backbone_gw *backbone_hash_find(struct bat_priv *bat_priv,
+		uint8_t *addr, short vid)
+{
+	struct hashtable_t *hash = bat_priv->backbone_hash;
+	struct hlist_head *head;
+	struct hlist_node *node;
+	struct backbone_gw search_entry, *backbone_gw;
+	struct backbone_gw *backbone_gw_tmp = NULL;
+	int index;
+
+	if (!hash)
+		return NULL;
+
+	memcpy(search_entry.orig, addr, ETH_ALEN);
+	search_entry.vid = vid;
+
+	/* *_claim() works for backbone gws too */
+	index = choose_claim(&search_entry, hash->size);
+	head = &hash->table[index];
+
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(backbone_gw, node, head, hash_entry) {
+		if (!compare_backbone_gw(&backbone_gw->hash_entry,
+					&search_entry))
+			continue;
+
+		if (!atomic_inc_not_zero(&backbone_gw->refcount))
+			continue;
+
+		backbone_gw_tmp = backbone_gw;
+		break;
+	}
+	rcu_read_unlock();
+
+	return backbone_gw_tmp;
+}
+
+/* delete all claims for a backbone */
+static void bla_del_backbone_claims(struct backbone_gw *backbone_gw)
+{
+	struct hashtable_t *hash;
+	struct hlist_node *node, *node_tmp;
+	struct hlist_head *head;
+	struct claim *claim;
+	int i;
+	spinlock_t *list_lock;	/* protects write access to the hash lists */
+
+	hash = backbone_gw->bat_priv->claim_hash;
+	if (!hash)
+		return;
+
+	for (i = 0; i < hash->size; i++) {
+		head = &hash->table[i];
+		list_lock = &hash->list_locks[i];
+
+		spin_lock_bh(list_lock);
+		hlist_for_each_entry_safe(claim, node, node_tmp,
+					 head, hash_entry) {
+
+			if (claim->backbone_gw != backbone_gw)
+				continue;
+
+			claim_free_ref(claim);
+			hlist_del_rcu(node);
+		}
+		spin_unlock_bh(list_lock);
+	}
+
+	/* all claims gone, initialize CRC */
+	backbone_gw->crc = BLA_CRC_INIT;
+}
+
+/*
+ * @bat_priv: the bat priv with all the soft interface information
+ * @orig: the mac address to be announced within the claim
+ * @vid: the VLAN ID
+ * @claimtype: the type of the claim (CLAIM, UNCLAIM, ANNOUNCE, ...)
+ *
+ * sends a claim frame according to the provided info.
+ */
+
+static void bla_send_claim(struct bat_priv *bat_priv, uint8_t *mac,
+		short vid, int claimtype)
+{
+	struct sk_buff *skb;
+	struct ethhdr *ethhdr;
+	struct hard_iface *primary_if;
+	struct net_device *soft_iface;
+	uint8_t *hw_src;
+	struct bla_claim_dst local_claim_dest;
+	uint32_t zeroip = 0;
+
+	primary_if = primary_if_get_selected(bat_priv);
+	if (!primary_if)
+		return;
+
+	memcpy(&local_claim_dest, claim_dest,
+			sizeof(local_claim_dest));
+	local_claim_dest.type = claimtype;
+
+	soft_iface = primary_if->soft_iface;
+
+	skb = arp_create(ARPOP_REPLY, ETH_P_ARP,
+		zeroip,				/* IP DST: 0.0.0.0 */
+		primary_if->soft_iface,
+		zeroip,				/* IP SRC: 0.0.0.0 */
+		NULL,				/* Ethernet DST: Broadcast */
+		primary_if->net_dev->dev_addr,	/* Ethernet SRC/HW SRC:
+						 * originator mac */
+		(uint8_t *)&local_claim_dest	/* HW DST: FF:43:05:XX:00:00
+						 * with XX   = claim type */
+		);
+
+	if (!skb)
+		goto out;
+
+	ethhdr = (struct ethhdr *)skb->data;
+	hw_src = (uint8_t *) ethhdr
+		+ sizeof(struct ethhdr) + sizeof(struct arphdr);
+
+	/* now we pretend that the client would have sent this ... */
+	switch (claimtype) {
+	case CLAIM_TYPE_ADD:
+		/*
+		 * normal claim frame
+		 * set Ethernet SRC to the clients mac
+		 */
+		memcpy(ethhdr->h_source, mac, ETH_ALEN);
+		bat_dbg(DBG_BLA, bat_priv,
+			"bla_send_claim(): CLAIM %pM on vid %d\n", mac, vid);
+		break;
+	case CLAIM_TYPE_DEL:
+		/*
+		 * unclaim frame
+		 * set HW SRC to the clients mac
+		 */
+		memcpy(hw_src, mac, ETH_ALEN);
+		bat_dbg(DBG_BLA, bat_priv,
+			"bla_send_claim(): UNCLAIM %pM on vid %d\n", mac, vid);
+		break;
+	case CLAIM_TYPE_ANNOUNCE:
+		/*
+		 * announcement frame
+		 * set HW SRC to the special mac containing the crc
+		 */
+		memcpy(hw_src, mac, ETH_ALEN);
+		bat_dbg(DBG_BLA, bat_priv,
+			"bla_send_claim(): ANNOUNCE of %pM on vid %d\n",
+			ethhdr->h_source, vid);
+		break;
+	case CLAIM_TYPE_REQUEST:
+		/*
+		 * request frame
+		 * set HW SRC to the special mac containing the crc
+		 */
+		memcpy(hw_src, mac, ETH_ALEN);
+		memcpy(ethhdr->h_dest, mac, ETH_ALEN);
+		bat_dbg(DBG_BLA, bat_priv,
+			"bla_send_claim(): REQUEST of %pM to %pM on vid %d\n",
+			ethhdr->h_source, ethhdr->h_dest, vid);
+		break;
+
+	}
+
+	if (vid != -1)
+		skb = vlan_insert_tag(skb, vid);
+
+	skb_reset_mac_header(skb);
+	skb->protocol = eth_type_trans(skb, soft_iface);
+	bat_priv->stats.rx_packets++;
+	bat_priv->stats.rx_bytes += skb->len + sizeof(struct ethhdr);
+	soft_iface->last_rx = jiffies;
+
+	netif_rx(skb);
+out:
+	if (primary_if)
+		hardif_free_ref(primary_if);
+}
+
+/*
+ * @bat_priv: the bat priv with all the soft interface information
+ * @orig: the mac address of the originator
+ * @vid: the VLAN ID
+ *
+ * searches for the backbone gw or creates a new one if it could not
+ * be found.
+ */
+
+static struct backbone_gw *bla_get_backbone_gw(struct bat_priv *bat_priv,
+		uint8_t *orig, short vid)
+{
+	struct backbone_gw *entry;
+
+	entry = backbone_hash_find(bat_priv, orig, vid);
+
+	if (entry)
+		return entry;
+
+	bat_dbg(DBG_BLA, bat_priv,
+		"bla_get_backbone_gw(): not found (%pM, %d),"
+		" creating new entry\n", orig, vid);
+
+	entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
+	if (!entry)
+		return NULL;
+
+	entry->vid = vid;
+	entry->lasttime = jiffies;
+	entry->crc = BLA_CRC_INIT;
+	entry->bat_priv = bat_priv;
+	atomic_set(&entry->request_sent, 0);
+	memcpy(entry->orig, orig, ETH_ALEN);
+
+	/* one for the hash, one for returning */
+	atomic_set(&entry->refcount, 2);
+
+	hash_add(bat_priv->backbone_hash, compare_backbone_gw, choose_claim,
+			entry, &entry->hash_entry);
+
+	return entry;
+}
+
+/*
+ * update or add the own backbone gw to make sure we announce
+ * where we receive other backbone gws
+ */
+void bla_update_own_backbone_gw(struct bat_priv *bat_priv, short vid)
+{
+	struct backbone_gw *backbone_gw;
+
+	backbone_gw = bla_get_backbone_gw(bat_priv, bat_priv->own_orig, vid);
+	backbone_gw->lasttime = jiffies;
+}
+
+/*
+ * @bat_priv: the bat priv with all the soft interface information
+ * @vid: the vid where the request came on
+ *
+ * Repeat all of our own claims, and finally send an ANNOUNCE frame
+ * to allow the requester another check if the CRC is correct now.
+ */
+
+static void bla_handle_request(struct bat_priv *bat_priv, short vid)
+{
+	struct hlist_node *node;
+	struct hlist_head *head;
+	struct hashtable_t *hash;
+	struct claim *claim;
+	struct backbone_gw *backbone_gw;
+	int i;
+
+	bat_dbg(DBG_BLA, bat_priv,
+		"bla_handle_request(): received a "
+		"claim request, send all of our own claims again\n");
+
+	backbone_gw = backbone_hash_find(bat_priv, bat_priv->own_orig, vid);
+
+	hash = bat_priv->claim_hash;
+	for (i = 0; i < hash->size; i++) {
+		head = &hash->table[i];
+
+		rcu_read_lock();
+		hlist_for_each_entry_rcu(claim, node,
+					 head, hash_entry) {
+			/* only own claims are interesting */
+			if (claim->backbone_gw != backbone_gw)
+				continue;
+
+			bla_send_claim(bat_priv, claim->addr, claim->vid,
+					CLAIM_TYPE_ADD);
+		}
+		rcu_read_unlock();
+	}
+
+	/* finally, send an announcement frame */
+	bla_send_announce(bat_priv, backbone_gw);
+	backbone_gw_free_ref(backbone_gw);
+}
+
+/*
+ * @backbone_gw: the backbone gateway from whom we are out of sync
+ *
+ * When the crc is wrong, ask the backbone gateway for a full table update.
+ * After the request, it will repeat all of his own claims and finally
+ * send an announcement claim with which we can check again.
+ */
+
+static void bla_send_request(struct backbone_gw *backbone_gw)
+{
+	/* first, remove all old entries */
+	bla_del_backbone_claims(backbone_gw);
+
+	bat_dbg(DBG_BLA, backbone_gw->bat_priv,
+		"Sending REQUEST to %pM\n",
+		backbone_gw->orig);
+
+	/* send request */
+	bla_send_claim(backbone_gw->bat_priv, backbone_gw->orig,
+			backbone_gw->vid, CLAIM_TYPE_REQUEST);
+
+	/* no local broadcasts should be sent or received, for now. */
+	if (!atomic_read(&backbone_gw->request_sent)) {
+		atomic_inc(&backbone_gw->bat_priv->bla_num_requests);
+		atomic_set(&backbone_gw->request_sent, 1);
+	}
+}
+
+/*
+ * @bat_priv: the bat priv with all the soft interface information
+ * @backbone_gw: our backbone gateway which should be announced
+ *
+ * This function sends an announcement. It is called from multiple
+ * places.
+ */
+static void bla_send_announce(struct bat_priv *bat_priv,
+		struct backbone_gw *backbone_gw)
+{
+	uint8_t mac[6];
+	uint16_t crc;
+
+	memcpy(mac, announce_mac, 4);
+	crc = htons(backbone_gw->crc);
+	memcpy(&mac[4], (uint8_t *) &crc, 2);
+
+	bla_send_claim(bat_priv, mac, backbone_gw->vid, CLAIM_TYPE_ANNOUNCE);
+
+}
+
+/*
+ * @bat_priv: the bat priv with all the soft interface information
+ * @mac: the mac address of the claim
+ * @vid: the VLAN ID of the frame
+ * @backbone_gw: the backbone gateway which claims it
+ *
+ * Adds a claim in the claim hash.
+ */
+
+static void bla_add_claim(struct bat_priv *bat_priv, const uint8_t *mac,
+		const short vid, struct backbone_gw *backbone_gw)
+{
+	struct claim *claim;
+	struct claim search_claim;
+
+	memcpy(search_claim.addr, mac, ETH_ALEN);
+	search_claim.vid = vid;
+	claim = claim_hash_find(bat_priv, &search_claim);
+
+	/* create a new claim entry if it does not exist yet. */
+	if (!claim) {
+		claim = kzalloc(sizeof(*claim), GFP_ATOMIC);
+		if (!claim)
+			return;
+
+		memcpy(claim->addr, mac, ETH_ALEN);
+		claim->vid = vid;
+		claim->lasttime = jiffies;
+		claim->backbone_gw = backbone_gw;
+
+		atomic_set(&claim->refcount, 2);
+		bat_dbg(DBG_BLA, bat_priv,
+			"bla_add_claim(): adding new entry %pM, vid %d to hash ...\n",
+			mac, vid);
+		hash_add(bat_priv->claim_hash, compare_claim, choose_claim,
+			claim, &claim->hash_entry);
+	} else {
+		claim->lasttime = jiffies;
+		if (claim->backbone_gw == backbone_gw)
+			/* no need to register a new backbone */
+			goto claim_free_ref;
+
+		bat_dbg(DBG_BLA, bat_priv,
+			"bla_add_claim(): changing ownership for %pM, vid %d\n",
+			mac, vid);
+
+		claim->backbone_gw->crc ^=
+			crc16(0, claim->addr, ETH_ALEN);
+		backbone_gw_free_ref(claim->backbone_gw);
+
+	}
+	/* set (new) backbone gw */
+	atomic_inc(&backbone_gw->refcount);
+	claim->backbone_gw = backbone_gw;
+
+	backbone_gw->crc ^= crc16(0, claim->addr, ETH_ALEN);
+	backbone_gw->lasttime = jiffies;
+
+claim_free_ref:
+	claim_free_ref(claim);
+}
+
+/*
+ * Delete a claim from the claim hash which has the
+ * given mac address and vid.
+ */
+static void bla_del_claim(struct bat_priv *bat_priv, const uint8_t *mac,
+		const short vid)
+{
+	struct claim search_claim, *claim;
+
+	memcpy(search_claim.addr, mac, ETH_ALEN);
+	search_claim.vid = vid;
+	claim = claim_hash_find(bat_priv, &search_claim);
+	if (!claim)
+		return;
+
+	bat_dbg(DBG_BLA, bat_priv, "bla_del_claim(): %pM, vid %d\n", mac, vid);
+
+	hash_remove(bat_priv->claim_hash, compare_claim, choose_claim,
+		    claim);
+	claim_free_ref(claim); /* reference from the hash is gone */
+
+	claim->backbone_gw->crc ^= crc16(0, claim->addr, ETH_ALEN);
+
+	/* don't need the reference from hash_find() anymore */
+	claim_free_ref(claim);
+}
+
+/* check for ANNOUNCE frame, return 1 if handled */
+static int handle_announce(struct bat_priv *bat_priv,
+		uint8_t *an_addr, uint8_t *backbone_addr, short vid)
+{
+	struct backbone_gw *backbone_gw;
+	uint16_t crc;
+
+	if (memcmp(an_addr, announce_mac, 4) != 0)
+		return 0;
+
+	backbone_gw = bla_get_backbone_gw(bat_priv, backbone_addr, vid);
+
+	if (unlikely(!backbone_gw))
+		return 1;
+
+
+	/* handle as ANNOUNCE frame */
+	backbone_gw->lasttime = jiffies;
+	crc = ntohs(*((uint16_t *) (&an_addr[4])));
+
+	bat_dbg(DBG_BLA, bat_priv,
+		"handle_announce(): ANNOUNCE vid %d (sent "
+		"by %pM)... CRC = %04x (nw order)\n",
+		vid, backbone_gw->orig, crc);
+
+	if (backbone_gw->crc != crc) {
+		bat_dbg(DBG_BLA, backbone_gw->bat_priv,
+			"handle_announce(): CRC FAILED for %pM/%d"
+			"(my = %04x, sent = %04x)\n",
+			backbone_gw->orig, backbone_gw->vid,
+			backbone_gw->crc, crc);
+
+		bla_send_request(backbone_gw);
+	} else {
+		/* if we have sent a request and the crc was OK,
+		 * we can allow traffic again. */
+		if (atomic_read(&backbone_gw->request_sent)) {
+			atomic_dec(&backbone_gw->bat_priv->bla_num_requests);
+			atomic_set(&backbone_gw->request_sent, 0);
+		}
+	}
+
+	backbone_gw_free_ref(backbone_gw);
+	return 1;
+}
+
+/* check for REQUEST frame, return 1 if handled */
+static int handle_request(struct bat_priv *bat_priv,
+		uint8_t *backbone_addr, struct ethhdr *ethhdr, short vid)
+{
+	/* check for REQUEST frame */
+	if (!compare_eth(backbone_addr, ethhdr->h_dest))
+		return 0;
+
+	/* sanity check, this should not happen on a normal switch,
+	 * we ignore it in this case. */
+	if (!compare_eth(ethhdr->h_dest, bat_priv->own_orig))
+		return 1;
+
+	bat_dbg(DBG_BLA, bat_priv,
+		"handle_request(): REQUEST vid %d (sent "
+		"by %pM)...\n",
+		vid, ethhdr->h_source);
+
+	bla_handle_request(bat_priv, vid);
+	return 1;
+}
+
+/* check for UNCLAIM frame, return 1 if handled */
+static int handle_unclaim(struct bat_priv *bat_priv,
+		uint8_t *backbone_addr, uint8_t *claim_addr, short vid)
+{
+	struct backbone_gw *backbone_gw;
+
+	/* unclaim in any case if it is our own */
+	if (compare_eth(backbone_addr, bat_priv->own_orig))
+		bla_send_claim(bat_priv, claim_addr, vid, CLAIM_TYPE_DEL);
+
+	backbone_gw = backbone_hash_find(bat_priv, backbone_addr, vid);
+
+	if (!backbone_gw)
+		return 0;
+
+	/* this must be an UNCLAIM frame */
+	bat_dbg(DBG_BLA, bat_priv, "handle_unclaim(): "
+		"UNCLAIM %pM on vid %d (sent by %pM)...\n",
+		claim_addr, vid, backbone_gw->orig);
+
+	bla_del_claim(bat_priv, claim_addr, vid);
+	backbone_gw_free_ref(backbone_gw);
+	return 1;
+}
+
+/* check for CLAIM frame, return 1 if handled */
+static int handle_claim(struct bat_priv *bat_priv,
+		uint8_t *backbone_addr, uint8_t *claim_addr, short vid)
+{
+	struct backbone_gw *backbone_gw;
+
+	/* register the gateway if not yet available, and add the claim. */
+
+	backbone_gw = bla_get_backbone_gw(bat_priv, backbone_addr, vid);
+
+	if (unlikely(!backbone_gw))
+		return 0;
+
+	/* this must be a CLAIM frame */
+	bla_add_claim(bat_priv, claim_addr, vid, backbone_gw);
+	if (compare_eth(backbone_addr, bat_priv->own_orig))
+		bla_send_claim(bat_priv, claim_addr, vid, CLAIM_TYPE_ADD);
+
+	/* TODO: we could cal something like tt_local_del() here. */
+
+	backbone_gw_free_ref(backbone_gw);
+	return 1;
+}
+
+/*
+ * @bat_priv: the bat priv with all the soft interface information
+ * @skb: the frame to be checked
+ *
+ * Check if this is a claim frame, and process it accordingly.
+ *
+ * returns 1 if it was a claim frame, otherwise return 0 to
+ * tell the callee that it can use the frame on its own.
+ */
+
+static int bla_process_claim(struct bat_priv *bat_priv, struct sk_buff *skb)
+{
+	struct ethhdr *ethhdr;
+	struct vlan_ethhdr *vhdr;
+	struct arphdr *arphdr;
+	uint8_t *hw_src, *hw_dst;
+	struct bla_claim_dst *bla_dst;
+	uint16_t proto;
+	int headlen;
+	short vid = -1;
+
+	ethhdr = (struct ethhdr *)skb_mac_header(skb);
+
+	if (ntohs(ethhdr->h_proto) == ETH_P_8021Q) {
+		vhdr = (struct vlan_ethhdr *) ethhdr;
+		vid = ntohs(vhdr->h_vlan_TCI) & VLAN_VID_MASK;
+		proto = ntohs(vhdr->h_vlan_encapsulated_proto);
+		headlen = sizeof(*vhdr);
+	} else {
+		proto = ntohs(ethhdr->h_proto);
+		headlen = sizeof(*ethhdr);
+	}
+
+	if (proto != ETH_P_ARP)
+		return 0; /* not a claim frame */
+
+	/* this must be a ARP frame. check if it is a claim. */
+
+	if (unlikely(!pskb_may_pull(skb, headlen + arp_hdr_len(skb->dev))))
+		return 0;
+
+	arphdr = (struct arphdr *) ((uint8_t *)ethhdr + headlen);
+
+	/* Check whether the ARP frame carries a valid
+	 * IP information */
+
+	if (arphdr->ar_hrd != htons(ARPHRD_ETHER))
+		return 0;
+	if (arphdr->ar_pro != htons(ETH_P_IP))
+		return 0;
+	if (arphdr->ar_hln != ETH_ALEN)
+		return 0;
+	if (arphdr->ar_pln != 4)
+		return 0;
+
+	hw_src = (uint8_t *)arphdr + sizeof(struct arphdr);
+	hw_dst = hw_src + ETH_ALEN + 4;
+	bla_dst = (struct bla_claim_dst *) hw_dst;
+
+	/* check if it is a claim frame. */
+	if (memcmp(hw_dst, claim_dest, 3) != 0)
+		return 0;
+
+	/* become a backbone gw ourselves on this vlan if not happened yet */
+	bla_update_own_backbone_gw(bat_priv, vid);
+
+	/* check for the different types of claim frames ... */
+	switch (bla_dst->type) {
+	case CLAIM_TYPE_ADD:
+		if (handle_claim(bat_priv, hw_src, ethhdr->h_source, vid))
+			return 1;
+		break;
+	case CLAIM_TYPE_DEL:
+		if (handle_unclaim(bat_priv, ethhdr->h_source, hw_src, vid))
+			return 1;
+		break;
+
+	case CLAIM_TYPE_ANNOUNCE:
+		if (handle_announce(bat_priv, hw_src, ethhdr->h_source, vid))
+			return 1;
+		break;
+	case CLAIM_TYPE_REQUEST:
+		if (handle_request(bat_priv, hw_src, ethhdr, vid))
+			return 1;
+		break;
+	}
+
+	bat_dbg(DBG_BLA, bat_priv, "bla_process_claim(): ERROR - this looks "
+		"like a claim frame, but is useless. eth src "
+		"%pM on vid %d ...(hw_src %pM, hw_dst %pM)\n",
+		ethhdr->h_source, vid, hw_src, hw_dst);
+	return 1;
+}
+
+/*
+ * Update the backbone gateways when the own orig address changes.
+ */
+void bla_update_orig_address(struct bat_priv *bat_priv, uint8_t *newaddr)
+{
+	struct backbone_gw *backbone_gw;
+	struct hlist_node *node;
+	struct hlist_head *head;
+	struct hashtable_t *hash;
+	int i;
+
+	hash = bat_priv->backbone_hash;
+	if (!hash)
+		return;
+
+	for (i = 0; i < hash->size; i++) {
+		head = &hash->table[i];
+
+		rcu_read_lock();
+		hlist_for_each_entry_rcu(backbone_gw, node, head, hash_entry) {
+			/* own orig still holds the old value. */
+			if (!compare_eth(backbone_gw->orig,
+						bat_priv->own_orig))
+				continue;
+
+			memcpy(backbone_gw->orig, newaddr, ETH_ALEN);
+			/* send an announce frame so others will ask for our
+			 * claims and update their tables. */
+			bla_send_announce(bat_priv, backbone_gw);
+		}
+		rcu_read_unlock();
+	}
+}
+
+/*
+ * Check when we last heard from other nodes, and remove them in case of
+ * a time out.
+ */
+static void bla_purge_backbone_gw(struct bat_priv *bat_priv)
+{
+	struct backbone_gw *backbone_gw;
+	struct hlist_node *node, *node_tmp;
+	struct hlist_head *head;
+	struct hashtable_t *hash;
+	spinlock_t *list_lock;	/* protects write access to the hash lists */
+	int i;
+
+	hash = bat_priv->backbone_hash;
+	if (!hash)
+		return;
+
+	for (i = 0; i < hash->size; i++) {
+		head = &hash->table[i];
+		list_lock = &hash->list_locks[i];
+
+		spin_lock_bh(list_lock);
+		hlist_for_each_entry_safe(backbone_gw, node, node_tmp,
+					  head, hash_entry) {
+			if (!time_after(jiffies, backbone_gw->lasttime
+				+ msecs_to_jiffies(BLA_BACKBONE_TIMEOUT)))
+				continue;
+
+			bat_dbg(DBG_BLA, backbone_gw->bat_priv,
+				"bla_purge_backbone_gw(): backbone gw %pM"
+				" timed out\n",	backbone_gw->orig);
+
+			/* don't wait for the pending request anymore */
+			if (atomic_read(&backbone_gw->request_sent))
+				atomic_dec(&bat_priv->bla_num_requests);
+
+			bla_del_backbone_claims(backbone_gw);
+
+			hlist_del_rcu(node);
+			backbone_gw_free_ref(backbone_gw);
+		}
+		spin_unlock_bh(list_lock);
+	}
+}
+
+/*
+ * Check when we heard last time from our own claims, and remove them in case of
+ * a time out
+ */
+static void bla_purge_claims(struct bat_priv *bat_priv)
+{
+	struct claim *claim;
+	struct hlist_node *node;
+	struct hlist_head *head;
+	struct hashtable_t *hash;
+	int i;
+
+	hash = bat_priv->claim_hash;
+	if (!hash)
+		return;
+
+	for (i = 0; i < hash->size; i++) {
+		head = &hash->table[i];
+
+		rcu_read_lock();
+		hlist_for_each_entry_rcu(claim, node, head, hash_entry) {
+			if (!compare_eth(claim->backbone_gw->orig,
+						bat_priv->own_orig))
+				continue;
+			if (!time_after(jiffies, claim->lasttime
+				+ msecs_to_jiffies(BLA_CLAIM_TIMEOUT)))
+				continue;
+
+			bat_dbg(DBG_BLA, bat_priv,
+				"bla_purge_claims(): %pM, vid %d, time out\n",
+				claim->addr, claim->vid);
+
+			handle_unclaim(bat_priv, bat_priv->own_orig,
+					claim->addr, claim->vid);
+		}
+		rcu_read_unlock();
+	}
+}
+
+/* (re)start the timer */
+static void bla_start_timer(struct bat_priv *bat_priv)
+{
+	INIT_DELAYED_WORK(&bat_priv->bla_work, bla_periodic_work);
+	queue_delayed_work(bat_event_workqueue, &bat_priv->bla_work,
+			   msecs_to_jiffies(BLA_PERIOD_LENGTH));
+}
+
+/*
+ * periodic work to do:
+ *  * purge structures when they are too old
+ *  * send announcements
+ */
+
+static void bla_periodic_work(struct work_struct *work)
+{
+	struct delayed_work *delayed_work =
+		container_of(work, struct delayed_work, work);
+	struct bat_priv *bat_priv =
+		container_of(delayed_work, struct bat_priv, bla_work);
+	struct hlist_node *node;
+	struct hlist_head *head;
+	struct backbone_gw *backbone_gw;
+	struct hashtable_t *hash;
+	int i;
+
+	bla_purge_claims(bat_priv);
+	bla_purge_backbone_gw(bat_priv);
+
+	if (!atomic_read(&bat_priv->bridge_loop_avoidance))
+		goto timer;
+
+	hash = bat_priv->backbone_hash;
+	if (!hash)
+		return;
+
+	for (i = 0; i < hash->size; i++) {
+		head = &hash->table[i];
+
+		rcu_read_lock();
+		hlist_for_each_entry_rcu(backbone_gw, node, head, hash_entry) {
+			if (!compare_eth(backbone_gw->orig, bat_priv->own_orig))
+				continue;
+
+			backbone_gw->lasttime = jiffies;
+
+			bla_send_announce(bat_priv, backbone_gw);
+		}
+		rcu_read_unlock();
+	}
+timer:
+	bla_start_timer(bat_priv);
+}
+
+/* initialize all bla structures */
+int bla_init(struct bat_priv *bat_priv)
+{
+	bat_dbg(DBG_BLA, bat_priv, "bla hash registering\n");
+
+	if (bat_priv->claim_hash)
+		return 1;
+
+	bat_priv->claim_hash = hash_new(128);
+	bat_priv->backbone_hash = hash_new(32);
+
+	if (!bat_priv->claim_hash || !bat_priv->backbone_hash)
+		return -1;
+
+	bat_dbg(DBG_BLA, bat_priv, "bla hashes initialized\n");
+
+	bla_start_timer(bat_priv);
+	return 1;
+}
+
+/**
+ * @skb: the frame to be checked
+ * @orig_node: the orig_node of the frame
+ * @hdr_size: maximum length of the frame
+ *
+ * bla_is_backbone_gw inspects the skb for the VLAN ID and returns 1
+ * if the orig_node is also a gateway on the soft interface, otherwise it
+ * returns 0.
+ *
+ **/
+
+int bla_is_backbone_gw(struct sk_buff *skb,
+		struct orig_node *orig_node, int hdr_size)
+{
+	struct ethhdr *ethhdr;
+	struct vlan_ethhdr *vhdr;
+	struct backbone_gw *backbone_gw;
+	short vid = -1;
+
+	if (!atomic_read(&orig_node->bat_priv->bridge_loop_avoidance))
+		return 0;
+
+	/* first, find out the vid. */
+	if (!pskb_may_pull(skb, hdr_size + sizeof(struct ethhdr)))
+		return 0;
+
+	ethhdr = (struct ethhdr *) (((uint8_t *)skb->data) + hdr_size);
+
+	if (ntohs(ethhdr->h_proto) == ETH_P_8021Q) {
+		if (!pskb_may_pull(skb, hdr_size + sizeof(struct vlan_ethhdr)))
+			return 0;
+
+		vhdr = (struct vlan_ethhdr *) ethhdr;
+		vid = ntohs(vhdr->h_vlan_TCI) & VLAN_VID_MASK;
+	}
+
+	/* see if this originator is a backbone gw for this VLAN */
+
+	backbone_gw = backbone_hash_find(orig_node->bat_priv,
+			orig_node->orig, vid);
+	if (!backbone_gw)
+		return 0;
+
+	bat_dbg(DBG_BLA, orig_node->bat_priv,
+		"bla_is_backbone_gw(): originator %pM has a claim "
+		"for vid %d on the LAN ...\n", orig_node->orig, vid);
+	backbone_gw_free_ref(backbone_gw);
+	return 1;
+}
+
+/* free all bla structures (for softinterface free or module unload) */
+void bla_free(struct bat_priv *bat_priv)
+{
+	struct claim *claim;
+	struct backbone_gw *backbone_gw;
+	struct hlist_node *node, *node_tmp;
+	struct hlist_head *head;
+	struct hashtable_t *hash;
+	spinlock_t *list_lock;	/* protects write access to the hash lists */
+	int i;
+
+	cancel_delayed_work_sync(&bat_priv->bla_work);
+	atomic_set(&bat_priv->bla_num_requests, 0);
+
+	hash = bat_priv->claim_hash;
+	if (!hash)
+		goto free_backbone_hash;
+
+
+	for (i = 0; i < hash->size; i++) {
+		head = &hash->table[i];
+		list_lock = &hash->list_locks[i];
+
+		spin_lock_bh(list_lock);
+		hlist_for_each_entry_safe(claim, node, node_tmp,
+					  head, hash_entry) {
+			hlist_del_rcu(node);
+			claim_free_ref(claim);
+		}
+		spin_unlock_bh(list_lock);
+	}
+
+	hash_destroy(hash);
+	bat_priv->claim_hash = NULL;
+free_backbone_hash:
+	hash = bat_priv->backbone_hash;
+	if (!hash)
+		return;
+
+	for (i = 0; i < hash->size; i++) {
+		head = &hash->table[i];
+		list_lock = &hash->list_locks[i];
+
+		spin_lock_bh(list_lock);
+		hlist_for_each_entry_safe(backbone_gw, node, node_tmp,
+					  head, hash_entry) {
+			hlist_del_rcu(node);
+			backbone_gw_free_ref(backbone_gw);
+		}
+		spin_unlock_bh(list_lock);
+	}
+
+	hash_destroy(hash);
+	bat_priv->backbone_hash = NULL;
+
+}
+
+/**
+ * @bat_priv: the bat priv with all the soft interface information
+ * @skb: the frame to be checked
+ * @vid: the VLAN ID of the frame
+ *
+ * bla_rx avoidance checks if:
+ *  * we have to race for a claim
+ *  * if the frame is allowed on the LAN
+ *
+ * in these cases, the skb is further handled by this function and
+ * returns 1, otherwise it returns 0 and the caller shall further
+ * process the skb.
+ *
+ **/
+
+int bla_rx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid)
+{
+	struct ethhdr *ethhdr;
+	struct claim search_claim, *claim = NULL;
+
+	ethhdr = (struct ethhdr *)skb_mac_header(skb);
+
+	if (!atomic_read(&bat_priv->bridge_loop_avoidance))
+		goto allow;
+
+
+	if (unlikely(atomic_read(&bat_priv->bla_num_requests)))
+		/* don't allow broadcasts while requests are in flight */
+		if (is_multicast_ether_addr(ethhdr->h_dest))
+			goto handled;
+
+	memcpy(search_claim.addr, ethhdr->h_source, ETH_ALEN);
+	search_claim.vid = vid;
+	claim = claim_hash_find(bat_priv, &search_claim);
+
+	if (!claim) {
+		/* possible optimization: race for a claim */
+		/* No claim exists yet, claim it for us! */
+		handle_claim(bat_priv, bat_priv->own_orig,
+				ethhdr->h_source, vid);
+		goto allow;
+	}
+
+	/* if it is our own claim ... */
+	if (compare_eth(claim->backbone_gw->orig, bat_priv->own_orig)) {
+		/* ... allow it in any case */
+		claim->lasttime = jiffies;
+		goto allow;
+	}
+
+	/* if it is a broadcast ... */
+	if (is_multicast_ether_addr(ethhdr->h_dest)) {
+		/* ... drop it. the responsible gateway is in charge. */
+		goto handled;
+	} else {
+		/* seems the client considers us as its best gateway.
+		 * send a claim and update the claim table
+		 * immediately. */
+		handle_claim(bat_priv, bat_priv->own_orig,
+				ethhdr->h_source, vid);
+		goto allow;
+	}
+allow:
+	bla_update_own_backbone_gw(bat_priv, vid);
+	if (claim)
+		claim_free_ref(claim);
+	return 0;
+handled:
+	kfree_skb(skb);
+	if (claim)
+		claim_free_ref(claim);
+	return 1;
+}
+
+/**
+ * @bat_priv: the bat priv with all the soft interface information
+ * @skb: the frame to be checked
+ * @vid: the VLAN ID of the frame
+ *
+ * bla_tx checks if:
+ *  * a claim was received which has to be processed
+ *  * the frame is allowed on the mesh
+ *
+ * in these cases, the skb is further handled by this function and
+ * returns 1, otherwise it returns 0 and the caller shall further
+ * process the skb.
+ *
+ **/
+
+int bla_tx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid)
+{
+	struct ethhdr *ethhdr;
+	struct claim search_claim, *claim = NULL;
+
+	if (!atomic_read(&bat_priv->bridge_loop_avoidance))
+		goto allow;
+
+	/* in VLAN case, the mac header might not be set. */
+	skb_reset_mac_header(skb);
+
+	if (bla_process_claim(bat_priv, skb))
+		goto handled;
+
+	ethhdr = (struct ethhdr *)skb_mac_header(skb);
+
+	if (unlikely(atomic_read(&bat_priv->bla_num_requests)))
+		/* don't allow broadcasts while requests are in flight */
+		if (is_multicast_ether_addr(ethhdr->h_dest))
+			goto handled;
+
+	memcpy(search_claim.addr, ethhdr->h_source, ETH_ALEN);
+	search_claim.vid = vid;
+
+	bat_dbg(DBG_BLA, bat_priv,
+		"bla_tx(): checked for claims, now starting ..."
+		"%pM -> %pM [%04x], vid %d\n",
+		ethhdr->h_source, ethhdr->h_dest,
+		ntohs(ethhdr->h_proto), search_claim.vid);
+	claim = claim_hash_find(bat_priv, &search_claim);
+
+	/* if no claim exists, allow it. */
+	if (!claim)
+		goto allow;
+
+	/* check if we are responsible. */
+	if (compare_eth(claim->backbone_gw->orig, bat_priv->own_orig)) {
+		/* if yes, the client has roamed and we have
+		 * to unclaim it. */
+		handle_unclaim(bat_priv, bat_priv->own_orig,
+				ethhdr->h_source, vid);
+		goto allow;
+	}
+
+	/* check if it is a multicast/broadcast frame */
+	if (is_multicast_ether_addr(ethhdr->h_dest)) {
+		/* drop it. the responsible gateway has forwarded it into
+		 * the backbone network. */
+		goto handled;
+	} else {
+		/* we must allow it. at least if we are
+		 * responsible for the DESTINATION. */
+		goto allow;
+	}
+allow:
+	bla_update_own_backbone_gw(bat_priv, vid);
+	if (claim)
+		claim_free_ref(claim);
+	return 0;
+handled:
+	if (claim)
+		claim_free_ref(claim);
+	return 1;
+}
diff --git a/bridge_loop_avoidance.h b/bridge_loop_avoidance.h
new file mode 100644
index 0000000..8df5036
--- /dev/null
+++ b/bridge_loop_avoidance.h
@@ -0,0 +1,30 @@ 
+/*
+ * Copyright (C) 2011 B.A.T.M.A.N. contributors:
+ *
+ * Simon Wunderlich
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of version 2 of the GNU General Public
+ * License as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
+ * 02110-1301, USA
+ *
+ */
+
+int bla_rx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid);
+int bla_tx(struct bat_priv *bat_priv, struct sk_buff *skb, short vid);
+int bla_is_backbone_gw(struct sk_buff *skb,
+		struct orig_node *orig_node, int hdr_size);
+void bla_update_orig_address(struct bat_priv *bat_priv, uint8_t *newaddr);
+int bla_init(struct bat_priv *bat_priv);
+void bla_free(struct bat_priv *bat_priv);
+
+#define BLA_CRC_INIT	0
diff --git a/compat.c b/compat.c
index 4c12468..5b389b2 100644
--- a/compat.c
+++ b/compat.c
@@ -28,4 +28,12 @@  void free_rcu_tt_local_entry(struct rcu_head *rcu)
 	kfree(tt_local_entry);
 }
 
+void free_rcu_backbone_gw(struct rcu_head *rcu)
+{
+	struct backbone_gw *backbone_gw;
+
+	backbone_gw = container_of(rcu, struct backbone_gw, rcu);
+	kfree(backbone_gw);
+}
+
 #endif /* < KERNEL_VERSION(3, 0, 0) */
diff --git a/compat.h b/compat.h
index 531ba85..41655a6 100644
--- a/compat.h
+++ b/compat.h
@@ -65,10 +65,12 @@ 
 #if LINUX_VERSION_CODE < KERNEL_VERSION(3, 0, 0)
 
 #define kfree_rcu(ptr, rcu_head) call_rcu(&ptr->rcu_head, free_rcu_##ptr)
+#define vlan_insert_tag(skb, vid) __vlan_put_tag(skb, vid)
 
 void free_rcu_gw_node(struct rcu_head *rcu);
 void free_rcu_neigh_node(struct rcu_head *rcu);
 void free_rcu_tt_local_entry(struct rcu_head *rcu);
+void free_rcu_backbone_gw(struct rcu_head *rcu);
 
 #endif /* < KERNEL_VERSION(3, 0, 0) */
 
diff --git a/hard-interface.c b/hard-interface.c
index 7704df4..f64adff 100644
--- a/hard-interface.c
+++ b/hard-interface.c
@@ -29,6 +29,7 @@ 
 #include "originator.h"
 #include "hash.h"
 #include "bat_ogm.h"
+#include "bridge_loop_avoidance.h"
 
 #include <linux/if_arp.h>
 
@@ -123,6 +124,11 @@  static void primary_if_update_addr(struct bat_priv *bat_priv)
 	memcpy(vis_packet->sender_orig,
 	       primary_if->net_dev->dev_addr, ETH_ALEN);
 
+	/* before updating bat_priv->own_orig,
+	 * change own addresses in backbone gw */
+	bla_update_orig_address(bat_priv, primary_if->net_dev->dev_addr);
+	memcpy(bat_priv->own_orig, primary_if->net_dev->dev_addr, ETH_ALEN);
+
 out:
 	if (primary_if)
 		hardif_free_ref(primary_if);
@@ -634,7 +640,7 @@  static int batman_skb_recv(struct sk_buff *skb, struct net_device *dev,
 	case BAT_TT_QUERY:
 		ret = recv_tt_query(skb, hard_iface);
 		break;
-		/* Roaming advertisement */
+		/* bridge roop avoidance query */
 	case BAT_ROAM_ADV:
 		ret = recv_roam_adv(skb, hard_iface);
 		break;
diff --git a/main.c b/main.c
index 36661b4..85cb79b 100644
--- a/main.c
+++ b/main.c
@@ -30,6 +30,7 @@ 
 #include "translation-table.h"
 #include "hard-interface.h"
 #include "gateway_client.h"
+#include "bridge_loop_avoidance.h"
 #include "vis.h"
 #include "hash.h"
 
@@ -109,6 +110,9 @@  int mesh_init(struct net_device *soft_iface)
 	if (vis_init(bat_priv) < 1)
 		goto err;
 
+	if (bla_init(bat_priv) < 1)
+		goto err;
+
 	atomic_set(&bat_priv->gw_reselect, 0);
 	atomic_set(&bat_priv->mesh_state, MESH_ACTIVE);
 	goto end;
@@ -136,6 +140,8 @@  void mesh_free(struct net_device *soft_iface)
 
 	tt_free(bat_priv);
 
+	bla_free(bat_priv);
+
 	atomic_set(&bat_priv->mesh_state, MESH_INACTIVE);
 }
 
diff --git a/main.h b/main.h
index b395c7b..1bd41f1 100644
--- a/main.h
+++ b/main.h
@@ -79,6 +79,9 @@ 
 #define MAX_AGGREGATION_BYTES 512
 #define MAX_AGGREGATION_MS 100
 
+#define BLA_PERIOD_LENGTH	10000	/* 10 seconds */
+#define BLA_BACKBONE_TIMEOUT	(BLA_PERIOD_LENGTH * 3)
+#define BLA_CLAIM_TIMEOUT	(BLA_PERIOD_LENGTH * 10)
 /* don't reset again within 30 seconds */
 #define RESET_PROTECTION_MS 30000
 #define EXPECTED_SEQNO_RANGE	65536
@@ -118,7 +121,8 @@  enum dbg_level {
 	DBG_BATMAN = 1 << 0,
 	DBG_ROUTES = 1 << 1, /* route added / changed / deleted */
 	DBG_TT	   = 1 << 2, /* translation table operations */
-	DBG_ALL    = 7
+	DBG_BLA    = 1 << 3, /* bridge loop avoidance */
+	DBG_ALL    = 15
 };
 
 
diff --git a/originator.c b/originator.c
index a9c8b66..52e778a 100644
--- a/originator.c
+++ b/originator.c
@@ -28,6 +28,7 @@ 
 #include "hard-interface.h"
 #include "unicast.h"
 #include "soft-interface.h"
+#include "bridge_loop_avoidance.h"
 
 static void purge_orig(struct work_struct *work);
 
diff --git a/packet.h b/packet.h
index 4d9e54c..48c7b74 100644
--- a/packet.h
+++ b/packet.h
@@ -90,6 +90,22 @@  enum tt_client_flags {
 	TT_CLIENT_PENDING = 1 << 10
 };
 
+/* claim frame types for the bridge loop avoidance */
+enum bla_claimframe {
+	CLAIM_TYPE_ADD		= 0x00,
+	CLAIM_TYPE_DEL		= 0x01,
+	CLAIM_TYPE_ANNOUNCE	= 0x02,
+	CLAIM_TYPE_REQUEST	= 0x03
+};
+
+/* the destination hardware field in the ARP frame is used to
+ * transport the claim type and the group id */
+struct bla_claim_dst {
+	uint8_t magic[3];	/* FF:43:05 */
+	uint8_t type;		/* bla_claimframe */
+	uint16_t group;		/* group id */
+} __packed;
+
 struct batman_ogm_packet {
 	uint8_t  packet_type;
 	uint8_t  version;  /* batman version field */
diff --git a/routing.c b/routing.c
index ef24a72..8050fb2 100644
--- a/routing.c
+++ b/routing.c
@@ -30,6 +30,7 @@ 
 #include "vis.h"
 #include "unicast.h"
 #include "bat_ogm.h"
+#include "bridge_loop_avoidance.h"
 
 void slide_own_bcast_window(struct hard_iface *hard_iface)
 {
@@ -1074,6 +1075,11 @@  int recv_bcast_packet(struct sk_buff *skb, struct hard_iface *recv_if)
 	/* rebroadcast packet */
 	add_bcast_packet_to_list(bat_priv, skb, 1);
 
+	/* don't hand the broadcast up if it is from an originator
+	 * from the same backbone. */
+	if (bla_is_backbone_gw(skb, orig_node, hdr_size))
+		goto out;
+
 	/* broadcast for me */
 	interface_rx(recv_if->soft_iface, skb, recv_if, hdr_size);
 	ret = NET_RX_SUCCESS;
diff --git a/soft-interface.c b/soft-interface.c
index 4806deb..46dd328 100644
--- a/soft-interface.c
+++ b/soft-interface.c
@@ -36,6 +36,7 @@ 
 #include <linux/etherdevice.h>
 #include <linux/if_vlan.h>
 #include "unicast.h"
+#include "bridge_loop_avoidance.h"
 
 
 static int bat_get_settings(struct net_device *dev, struct ethtool_cmd *cmd);
@@ -151,6 +152,9 @@  static int interface_tx(struct sk_buff *skb, struct net_device *soft_iface)
 		goto dropped;
 	}
 
+	if (bla_tx(bat_priv, skb, vid))
+		goto dropped;
+
 	/* Register the client MAC in the transtable */
 	tt_local_add(soft_iface, ethhdr->h_source, skb->skb_iif);
 
@@ -286,6 +290,11 @@  void interface_rx(struct net_device *soft_iface,
 	if (is_ap_isolated(bat_priv, ethhdr->h_source, ethhdr->h_dest))
 		goto dropped;
 
+	/* Let the bridge loop avoidance check the packet. If it will
+	 * not handle it, we can safely push it up. */
+	if (bla_rx(bat_priv, skb, vid))
+		goto out;
+
 	netif_rx(skb);
 	goto out;
 
@@ -355,6 +364,7 @@  struct net_device *softif_create(const char *name)
 
 	atomic_set(&bat_priv->aggregated_ogms, 1);
 	atomic_set(&bat_priv->bonding, 0);
+	atomic_set(&bat_priv->bridge_loop_avoidance, 1);
 	atomic_set(&bat_priv->ap_isolation, 0);
 	atomic_set(&bat_priv->vis_mode, VIS_TYPE_CLIENT_UPDATE);
 	atomic_set(&bat_priv->gw_mode, GW_MODE_OFF);
@@ -372,6 +382,7 @@  struct net_device *softif_create(const char *name)
 	atomic_set(&bat_priv->ttvn, 0);
 	atomic_set(&bat_priv->tt_local_changes, 0);
 	atomic_set(&bat_priv->tt_ogm_append_cnt, 0);
+	atomic_set(&bat_priv->bla_num_requests, 0);
 
 	bat_priv->tt_buff = NULL;
 	bat_priv->tt_buff_len = 0;
diff --git a/types.h b/types.h
index db6ae2b..3cffe8d 100644
--- a/types.h
+++ b/types.h
@@ -147,6 +147,7 @@  struct bat_priv {
 	atomic_t bonding;		/* boolean */
 	atomic_t fragmentation;		/* boolean */
 	atomic_t ap_isolation;		/* boolean */
+	atomic_t bridge_loop_avoidance;	/* boolean */
 	atomic_t vis_mode;		/* VIS_TYPE_* */
 	atomic_t gw_mode;		/* GW_MODE_* */
 	atomic_t gw_sel_class;		/* uint */
@@ -160,6 +161,7 @@  struct bat_priv {
 	atomic_t ttvn; /* translation table version number */
 	atomic_t tt_ogm_append_cnt;
 	atomic_t tt_local_changes; /* changes registered in a OGM interval */
+	atomic_t bla_num_requests; /* number of bla requests in flight */
 	/* The tt_poss_change flag is used to detect an ongoing roaming phase.
 	 * If true, then I received a Roaming_adv and I have to inspect every
 	 * packet directed to me to check whether I am still the true
@@ -178,6 +180,8 @@  struct bat_priv {
 	struct hashtable_t *orig_hash;
 	struct hashtable_t *tt_local_hash;
 	struct hashtable_t *tt_global_hash;
+	struct hashtable_t *claim_hash;
+	struct hashtable_t *backbone_hash;
 	struct list_head tt_req_list; /* list of pending tt_requests */
 	struct list_head tt_roam_list;
 	struct hashtable_t *vis_hash;
@@ -198,10 +202,12 @@  struct bat_priv {
 	struct delayed_work tt_work;
 	struct delayed_work orig_work;
 	struct delayed_work vis_work;
+	struct delayed_work bla_work;
 	struct gw_node __rcu *curr_gw;  /* rcu protected pointer */
 	atomic_t gw_reselect;
 	struct hard_iface __rcu *primary_if;  /* rcu protected pointer */
 	struct vis_info *my_vis_info;
+	uint8_t own_orig[6];		/* cache primary hardifs address */
 };
 
 struct socket_client {
@@ -239,6 +245,28 @@  struct tt_global_entry {
 	struct rcu_head rcu;
 };
 
+struct backbone_gw {
+	uint8_t orig[ETH_ALEN];
+	short vid;		/* used VLAN ID */
+	struct hlist_node hash_entry;
+	struct bat_priv *bat_priv;
+	unsigned long lasttime;	/* last time we heard of this backbone gw */
+	atomic_t request_sent;
+	atomic_t refcount;
+	struct rcu_head rcu;
+	uint16_t crc;		/* crc checksum over all claims */
+} __packed;
+
+struct claim {
+	uint8_t addr[ETH_ALEN];
+	short vid;
+	struct backbone_gw *backbone_gw;
+	unsigned long lasttime;	/* last time we heard of claim (locals only) */
+	struct rcu_head rcu;
+	atomic_t refcount;
+	struct hlist_node hash_entry;
+} __packed;
+
 struct tt_change_node {
 	struct list_head list;
 	struct tt_change change;