batman-adv: fix locking in hash_add()

Message ID ebf1a5f57c2b4069b177913c3a7ee3d7e0b8db3d.1336509029.git.mschiffer@universe-factory.net (mailing list archive)
State Accepted, archived
Commit 41c587435a5bde3331addf256e9f5ddea0bde8c5

Commit Message

Matthias Schiffer May 8, 2012, 8:31 p.m. UTC
  To ensure an entry isn't added twice, all comparisons have to be protected by the
hash line's write spinlock. This doesn't really hurt: adding an element that is
already present in the hash shouldn't happen very often, so in most cases the
lock would have to be taken anyway.

Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net>
---
 hash.h |   15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)
  

Comments

Sven Eckelmann May 8, 2012, 8:38 p.m. UTC | #1
On Tuesday 08 May 2012 22:31:57 Matthias Schiffer wrote:
> To ensure an entry isn't added twice all comparisons have to be protected by
> the hash line write spinlock. This doesn't really hurt as the case that it
> is tried to add an element already present to the hash shouldn't occur very
> often, so in most cases the lock would have have to be taken anyways.
> 
> Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net>
> ---
>  hash.h |   15 ++++++---------
>  1 file changed, 6 insertions(+), 9 deletions(-)
> 
> diff --git a/hash.h b/hash.h
> index 7bcb98f..2e0409a 100644
> --- a/hash.h
> +++ b/hash.h
> @@ -109,26 +109,23 @@ static inline int hash_add(struct hashtable_t *hash,
>  	head = &hash->table[index];
>  	list_lock = &hash->list_locks[index];
> 
> -	rcu_read_lock();
> -	__hlist_for_each_rcu(node, head) {
> +	spin_lock_bh(list_lock);
> +
> +	hlist_for_each(node, head) {
>  		if (!compare(node, data))
>  			continue;
> 
>  		ret = 1;
> -		goto err_unlock;
> +		goto unlock;
>  	}
> -	rcu_read_unlock();
> 
>  	/* no duplicate found in list, add new element */
> -	spin_lock_bh(list_lock);
>  	hlist_add_head_rcu(data_node, head);
> -	spin_unlock_bh(list_lock);
> 
>  	ret = 0;
> -	goto out;
> 
> -err_unlock:
> -	rcu_read_unlock();
> +unlock:
> +	spin_unlock_bh(list_lock);
>  out:
>  	return ret;
>  }

Acked-by: Sven Eckelmann <sven@narfation.org>

Thanks,
	Sven
  
Marek Lindner May 11, 2012, 7:50 p.m. UTC | #2
On Wednesday, May 09, 2012 04:38:58 Sven Eckelmann wrote:
>   On Tuesday 08 May 2012 22:31:57 Matthias Schiffer wrote:
> > To ensure an entry isn't added twice all comparisons have to be protected
> > by the hash line write spinlock. This doesn't really hurt as the case
> > that it is tried to add an element already present to the hash shouldn't
> > occur very often, so in most cases the lock would have have to be taken
> > anyways.
> >
> > 
> >[..]
> >
> > -err_unlock:
> > -     rcu_read_unlock();
> > +unlock:
> > +     spin_unlock_bh(list_lock);
> >
> >  out:
> >       return ret;
> >  }
> 
> Acked-by: Sven Eckelmann <sven@narfation.org>

Applied in revision 41c5874.

Thanks,
Marek
  

Patch

diff --git a/hash.h b/hash.h
index 7bcb98f..2e0409a 100644
--- a/hash.h
+++ b/hash.h
@@ -109,26 +109,23 @@  static inline int hash_add(struct hashtable_t *hash,
 	head = &hash->table[index];
 	list_lock = &hash->list_locks[index];
 
-	rcu_read_lock();
-	__hlist_for_each_rcu(node, head) {
+	spin_lock_bh(list_lock);
+
+	hlist_for_each(node, head) {
 		if (!compare(node, data))
 			continue;
 
 		ret = 1;
-		goto err_unlock;
+		goto unlock;
 	}
-	rcu_read_unlock();
 
 	/* no duplicate found in list, add new element */
-	spin_lock_bh(list_lock);
 	hlist_add_head_rcu(data_node, head);
-	spin_unlock_bh(list_lock);
 
 	ret = 0;
-	goto out;
 
-err_unlock:
-	rcu_read_unlock();
+unlock:
+	spin_unlock_bh(list_lock);
 out:
 	return ret;
 }