Discussion: New Defects reported by Coverity Scan for ceph (fwd)
Sage Weil
2014-09-25 15:02:09 UTC
John Spray
2014-09-25 15:27:44 UTC
Nice to see that Coverity and lockdep agree :-)

This should go away with the fix for #9562.

John
---------- Forwarded message ----------
To: undisclosed-recipients:;
Date: Thu, 25 Sep 2014 06:18:46 -0700
Subject: New Defects reported by Coverity Scan for ceph
Hi,
Please find the latest report on new defect(s) introduced to ceph found with Coverity Scan.
Defect(s) Reported-by: Coverity Scan
Showing 1 of 1 defect(s)
** CID 1241497: Thread deadlock (ORDER_REVERSAL)
________________________________________________________________________________________________________
*** CID 1241497: Thread deadlock (ORDER_REVERSAL)
/osdc/Filer.cc: 314 in Filer::_do_purge_range(PurgeRange *, int)()
308 return;
309 }
310
311 int max = 10 - pr->uncommitted;
312 while (pr->num > 0 && max > 0) {
313 object_t oid = file_object_t(pr->ino, pr->first);
CID 1241497: Thread deadlock (ORDER_REVERSAL)
Calling "get_osdmap_read" acquires lock "RWLock.L" while holding lock "Mutex._m" (count: 15 / 30).
314 const OSDMap *osdmap = objecter->get_osdmap_read();
315 object_locator_t oloc = osdmap->file_to_object_locator(pr->layout);
316 objecter->put_osdmap_read();
317 objecter->remove(oid, oloc, pr->snapc, pr->mtime, pr->flags,
318 NULL, new C_PurgeRange(this, pr));
319 pr->uncommitted++;
________________________________________________________________________________________________________
To view the defects in Coverity Scan, visit http://scan.coverity.com/projects/25?tab=overview
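To illustrate what ORDER_REVERSAL means here: the report says _do_purge_range() takes the Objecter's OSDMap RWLock ("RWLock.L") while already holding a Mutex ("Mutex._m"), and some other path takes the same pair in the opposite order. A minimal sketch of the hazard, with invented names and standard C++ primitives standing in for Ceph's Mutex/RWLock wrappers:

    // Hypothetical illustration of ORDER_REVERSAL; not Ceph code.
    #include <mutex>
    #include <shared_mutex>
    #include <thread>

    std::mutex pr_lock;          // stands in for the Mutex held in _do_purge_range()
    std::shared_mutex map_lock;  // stands in for the Objecter's OSDMap RWLock

    void purge_path() {
      std::lock_guard<std::mutex> l(pr_lock);           // Mutex first...
      std::shared_lock<std::shared_mutex> m(map_lock);  // ...then the RWLock
      // consult the map and submit the remove op
    }

    void some_other_path() {
      std::shared_lock<std::shared_mutex> m(map_lock);  // RWLock first...
      std::lock_guard<std::mutex> l(pr_lock);           // ...then the Mutex
      // If this interleaves with purge_path(), each thread can end up
      // waiting on the lock the other already holds: a deadlock.
    }

    int main() {
      std::thread a(purge_path), b(some_other_path);
      a.join(); b.join();
    }

The usual remedies are to pick one global acquisition order, or to drop the first lock (or copy out the data it protects) before taking the second; which of these the #9562 fix actually does is not shown in this thread.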
Sage Weil
2014-09-30 13:59:35 UTC
Looks like recent changes from Greg, Loic, and me.
Loic Dachary
2014-09-30 17:26:35 UTC
I'll fix the aio.cc problems, thanks!
Post by Sage Weil
Looks like recent changes from Greg, Loic, and me.
--
Loïc Dachary, Artisan Logiciel Libre
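For context on the aio.cc leaks mentioned above: in gtest code, a failed ASSERT_* returns from the test body early, so anything allocated before it leaks unless it is released first. A minimal sketch of the pattern, with a hypothetical function name and librados C++ calls as documented in librados.hpp (worth double-checking against the header):

    #include <rados/librados.hpp>

    // Hypothetical reduction of the RESOURCE_LEAK pattern; not the actual test code.
    int leaky_write(librados::IoCtx& ioctx, librados::bufferlist& bl) {
      librados::AioCompletion* c = librados::Rados::aio_create_completion();
      int r = ioctx.aio_write("foo", c, bl, bl.length(), 0);
      if (r < 0)
        return r;            // early exit: `c` is never released -> RESOURCE_LEAK
      c->wait_for_complete();
      int ret = c->get_return_value();
      c->release();          // released only on the happy path above
      return ret;
    }

The fix is typically to call release() on every exit path, or to release the completion before any assertion that can bail out of the test body.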
Gregory Farnum
2014-09-30 17:36:41 UTC
Post by Sage Weil
Looks like recent changes from Greg, Loic, and me.
---------- Forwarded message ----------
To: undisclosed-recipients:;
Date: Tue, 30 Sep 2014 06:21:08 -0700
Subject: New Defects reported by Coverity Scan for ceph
Hi,
Please find the latest report on new defect(s) introduced to ceph found with Coverity Scan.
Defect(s) Reported-by: Coverity Scan
Showing 4 of 4 defect(s)
** CID 1242019: Data race condition (MISSING_LOCK)
/msg/Pipe.cc: 230 in Pipe::DelayedDelivery::entry()()
** CID 1242021: Resource leak (RESOURCE_LEAK)
/test/librados/tier.cc: 1026 in LibRadosTwoPoolsPP_EvictSnap2_Test::TestBody()()
/test/librados/tier.cc: 1022 in LibRadosTwoPoolsPP_EvictSnap2_Test::TestBody()()
/test/librados/tier.cc: 1040 in LibRadosTwoPoolsPP_EvictSnap2_Test::TestBody()()
/test/librados/tier.cc: 1037 in LibRadosTwoPoolsPP_EvictSnap2_Test::TestBody()()
** CID 1242020: Resource leak (RESOURCE_LEAK)
/test/librados/aio.cc: 168 in LibRadosAio_TooBig_Test::TestBody()()
** CID 1242018: Resource leak (RESOURCE_LEAK)
/test/librados/aio.cc: 188 in LibRadosAio_TooBigPP_Test::TestBody()()
/test/librados/aio.cc: 190 in LibRadosAio_TooBigPP_Test::TestBody()()
/test/librados/aio.cc: 187 in LibRadosAio_TooBigPP_Test::TestBody()()
________________________________________________________________________________________________________
*** CID 1242019: Data race condition (MISSING_LOCK)
/msg/Pipe.cc: 230 in Pipe::DelayedDelivery::entry()()
224 if (flush_count > 0) {
225 --flush_count;
226 active_flush = true;
227 }
228 if (pipe->in_q->can_fast_dispatch(m)) {
229 if (!stop_fast_dispatching_flag) {
CID 1242019: Data race condition (MISSING_LOCK)
Accessing "this->delay_dispatching" without holding lock "Mutex._m". Elsewhere, "_ZN4Pipe15DelayedDeliveryE.delay_dispatching" is accessed with "Mutex._m" held 1 out of 2 times (1 of these accesses strongly imply that it is necessary).
230 delay_dispatching = true;
231 delay_lock.Unlock();
232 pipe->in_q->fast_dispatch(m);
233 delay_lock.Lock();
234 delay_dispatching = false;
235 if (stop_fast_dispatching_flag) {
This one's a false positive. (delay_dispatching is protected by the
delay_lock, but I think it's picking up on the Pipe::lock which is
held when DelayedDelivery is constructed and initialized.) Is there a
way I should annotate this, or is it something we need to adjust in
the Coverity web interface?
-Greg
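A sketch of Greg's theory about why the heuristic fires, reduced to invented names and standard primitives: the flag is written once under a different lock at construction time, so the checker's statistics tie it to the wrong mutex.

    #include <mutex>

    // Hypothetical reduction of the false positive; not the actual Pipe code.
    struct Delayed {
      std::mutex delay_lock;       // the lock that really protects the flag
      bool delay_dispatching;

      explicit Delayed(std::mutex& pipe_lock) {
        std::lock_guard<std::mutex> l(pipe_lock);  // caller's lock held at setup
        delay_dispatching = false;                 // one access tied to pipe_lock
      }

      void entry() {
        std::lock_guard<std::mutex> l(delay_lock); // the real protecting lock
        delay_dispatching = true;                  // checker: "1 of 2 accesses
      }                                            // used a different mutex"
    };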
Sage Weil
2014-09-30 17:38:03 UTC
Post by Gregory Farnum
Post by Sage Weil
Looks like recent changes from Greg, Loic, and me.
[...]
This one's a false positive. (delay_dispatching is protected by the
delay_lock, but I think it's picking up on the Pipe::lock which is
held when DelayedDelivery is constructed and initialized.) Is there a
way I should annotate this, or is it something we need to adjust in
the Coverity web interface?
There are annotations, but I don't know how they work. I've been marking
them through the web interface...

sage
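For reference, Coverity's in-source suppressions are conventionally a one-line comment of the form // coverity[<event_tag>] placed immediately before the flagged line; the exact tag name and syntax should be treated as an assumption and verified against the docs for the Scan version ceph uses. Applied to this report, it might look like:

    // delay_dispatching is protected by delay_lock; the checker is
    // misled by Pipe::lock being held when this object is constructed.
    // coverity[missing_lock]
    delay_dispatching = true;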
Ric Wheeler
2014-09-30 17:41:46 UTC
Post by Sage Weil
Post by Gregory Farnum
Post by Sage Weil
Looks like recent changes from Greg, Loic, and me.
[...]
This one's a false positive. (delay_dispatching is protected by the
delay_lock, but I think it's picking up on the Pipe::lock which is
held when DelayedDelivery is constructed and initialized.) Is there a
way I should annotate this, or is it something we need to adjust in
the Coverity web interface?
There are annotations, but I don't know how they work. I've been marking
them through the web interface...
sage
Jeff and Kaleb (last I remember) had more expertise in Coverity magic; they
might know how to annotate those false positives...

ric
