From patchwork Wed Oct 27 08:02:25 2010
X-Patchwork-Submitter: Sven Eckelmann
X-Patchwork-Id: 446
From: Sven Eckelmann
To: b.a.t.m.a.n@lists.open-mesh.org
Date: Wed, 27 Oct 2010 10:02:25 +0200
Message-Id: <1288166545-8252-1-git-send-email-sven.eckelmann@gmx.de>
Subject: [B.A.T.M.A.N.] [PATCH] batman-adv: Correct sparse warning about
 different lock contexts for basic block
List-Id: The list for a Better Approach To Mobile Ad-hoc Networking

sparse noticed that if_list_lock is taken in some situations with bottom
halves disabled on some SMP kernels, so we must always lock it with
spin_lock_bh.
Signed-off-by: Sven Eckelmann
---
 batman-adv/hard-interface.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/batman-adv/hard-interface.c b/batman-adv/hard-interface.c
index 37f0f8b..b5514e3 100644
--- a/batman-adv/hard-interface.c
+++ b/batman-adv/hard-interface.c
@@ -441,9 +441,9 @@ static struct batman_if *hardif_add_interface(struct net_device *net_dev)
 
 	check_known_mac_addr(batman_if->net_dev->dev_addr);
 
-	spin_lock(&if_list_lock);
+	spin_lock_bh(&if_list_lock);
 	list_add_tail_rcu(&batman_if->list, &if_list);
-	spin_unlock(&if_list_lock);
+	spin_unlock_bh(&if_list_lock);
 
 	/* extra reference for return */
 	kref_get(&batman_if->refcount);
@@ -478,11 +478,11 @@ void hardif_remove_interfaces(void)
 	struct batman_if *batman_if, *batman_if_tmp;
 
 	rtnl_lock();
-	spin_lock(&if_list_lock);
+	spin_lock_bh(&if_list_lock);
 	list_for_each_entry_safe(batman_if, batman_if_tmp, &if_list, list) {
 		hardif_remove_interface(batman_if);
 	}
-	spin_unlock(&if_list_lock);
+	spin_unlock_bh(&if_list_lock);
 	rtnl_unlock();
 }
 
@@ -508,9 +508,9 @@ static int hard_if_event(struct notifier_block *this,
 		hardif_deactivate_interface(batman_if);
 		break;
 	case NETDEV_UNREGISTER:
-		spin_lock(&if_list_lock);
+		spin_lock_bh(&if_list_lock);
 		hardif_remove_interface(batman_if);
-		spin_unlock(&if_list_lock);
+		spin_unlock_bh(&if_list_lock);
 		break;
 	case NETDEV_CHANGEMTU:
 		if (batman_if->soft_iface)
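
For reference, the locking rule the patch enforces: once a spinlock can be
taken from bottom-half (softirq) context, every process-context acquisition
must use the _bh variant, otherwise a softirq arriving on the same CPU while
the lock is held can try to take it again and deadlock. A minimal
illustrative sketch of the pattern (not part of the patch; the lock and
function names here are made up):

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);

/* Process context: spin_lock_bh() disables bottom halves on this CPU
 * before taking the lock, so a softirq cannot interrupt the critical
 * section and deadlock on example_lock. */
static void process_context_path(void)
{
	spin_lock_bh(&example_lock);
	/* ... modify the shared list ... */
	spin_unlock_bh(&example_lock);
}

/* Softirq context: bottom halves are already disabled here, so the
 * plain spin_lock() variant is sufficient. */
static void softirq_path(void)
{
	spin_lock(&example_lock);
	/* ... read the shared list ... */
	spin_unlock(&example_lock);
}
```

sparse can spot the mismatch because a plain spin_lock() and a
spin_lock_bh() on the same lock leave the basic block in different
lock/context states, which is exactly the warning the subject line refers to.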