From patchwork Thu Sep 16 20:18:35 2010
X-Patchwork-Submitter: Sven Eckelmann
X-Patchwork-Id: 379
From: Sven Eckelmann
To: b.a.t.m.a.n@lists.open-mesh.net
Date: Thu, 16 Sep 2010 22:18:35 +0200
Message-Id: <1284668317-19890-3-git-send-email-sven.eckelmann@gmx.de>
X-Mailer: git-send-email 1.7.2.3
In-Reply-To: <1284668317-19890-1-git-send-email-sven.eckelmann@gmx.de>
References: <1284668317-19890-1-git-send-email-sven.eckelmann@gmx.de>
Subject: [B.A.T.M.A.N.] [PATCH 2/4] batman-adv: Protect update side of gw_list
List-Id: The list for a Better Approach To Mobile Ad-hoc Networking

hlist_add_head_rcu() must be protected by the gw_list_lock of the
current bat_priv, as is already done for hlist_del_rcu(). It is
important that this lock is now always taken with spin_lock_irqsave,
because gw_node_add can also be called indirectly from parts of the
kernel that run with interrupts disabled.
Signed-off-by: Sven Eckelmann
---
 batman-adv/gateway_client.c |   13 +++++++++----
 1 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/batman-adv/gateway_client.c b/batman-adv/gateway_client.c
index 6721398..dd96d99 100644
--- a/batman-adv/gateway_client.c
+++ b/batman-adv/gateway_client.c
@@ -196,6 +196,7 @@ static void gw_node_add(struct bat_priv *bat_priv,
 {
 	struct gw_node *gw_node;
 	int down, up;
+	unsigned long flags;
 
 	gw_node = kmalloc(sizeof(struct gw_node), GFP_ATOMIC);
 	if (!gw_node)
@@ -205,7 +206,9 @@ static void gw_node_add(struct bat_priv *bat_priv,
 	INIT_HLIST_NODE(&gw_node->list);
 	gw_node->orig_node = orig_node;
 
+	spin_lock_irqsave(&bat_priv->gw_list_lock, flags);
 	hlist_add_head_rcu(&gw_node->list, &bat_priv->gw_list);
+	spin_unlock_irqrestore(&bat_priv->gw_list_lock, flags);
 
 	gw_srv_class_to_kbit(new_gwflags, &down, &up);
 	bat_dbg(DBG_BATMAN, bat_priv,
@@ -273,8 +276,9 @@ void gw_node_purge_deleted(struct bat_priv *bat_priv)
 	struct gw_node *gw_node;
 	struct hlist_node *node, *node_tmp;
 	unsigned long timeout = 2 * PURGE_TIMEOUT * HZ;
+	unsigned long flags;
 
-	spin_lock(&bat_priv->gw_list_lock);
+	spin_lock_irqsave(&bat_priv->gw_list_lock, flags);
 
 	hlist_for_each_entry_safe(gw_node, node, node_tmp,
 				  &bat_priv->gw_list, list) {
@@ -286,15 +290,16 @@ void gw_node_purge_deleted(struct bat_priv *bat_priv)
 		}
 	}
 
-	spin_unlock(&bat_priv->gw_list_lock);
+	spin_unlock_irqrestore(&bat_priv->gw_list_lock, flags);
 }
 
 void gw_node_list_free(struct bat_priv *bat_priv)
 {
 	struct gw_node *gw_node;
 	struct hlist_node *node, *node_tmp;
+	unsigned long flags;
 
-	spin_lock(&bat_priv->gw_list_lock);
+	spin_lock_irqsave(&bat_priv->gw_list_lock, flags);
 
 	hlist_for_each_entry_safe(gw_node, node, node_tmp,
 				  &bat_priv->gw_list, list) {
@@ -303,7 +308,7 @@ void gw_node_list_free(struct bat_priv *bat_priv)
 	}
 	gw_deselect(bat_priv);
 
-	spin_unlock(&bat_priv->gw_list_lock);
+	spin_unlock_irqrestore(&bat_priv->gw_list_lock, flags);
 }
 
 static int _write_buffer_text(struct bat_priv *bat_priv,