author | Somasundaram S <somasundaram@nvidia.com> | 2014-05-13 20:24:17 +0530
committer | Mrutyunjay Sawant <msawant@nvidia.com> | 2014-06-04 06:12:57 -0700
commit | 53b962c333cfe81fece15426a64229ad6a679230 (patch)
tree | 459811dda9d8b326c90eaa37cd9562d400af42a3 /drivers/media/platform/tegra/nvavp/nvavp_dev.c
parent | 778f7433d27fcdb13f44358b29324ec45ec7e950 (diff)
media: tegra: nvavp: Fix possible deadlock issue
Running the following commands with the lockdep patches applied:
su
stop
stop media
results in:
[ 121.879482] ======================================================
[ 121.885792] [ INFO: possible circular locking dependency detected ]
[ 121.892103] 3.10.33-g3a639d14292b-dirty #995 Tainted: G W
[ 121.898392] -------------------------------------------------------
[ 121.904684] Binder_2/862 is trying to acquire lock:
[ 121.909578] ((&nvavp->clock_disable_work)){+.+...}, at: [<ffffffc0000c5a74>] flush_work+0x0/0x288
[ 121.918632]
[ 121.918632] but task is already holding lock:
[ 121.924484] (&nvavp->open_lock){+.+.+.}, at: [<ffffffc00065b868>] tegra_nvavp_video_release+0x2c/0x5c
[ 121.933908]
[ 121.933908] which lock already depends on the new lock.
[ 121.933908]
[ 121.942118]
[ 121.942118] the existing dependency chain (in reverse order) is:
[ 121.949625]
-> #2 (&nvavp->open_lock){+.+.+.}:
[ 121.954311] [<ffffffc0001019b0>] __lock_acquire+0x22a8/0x2358
[ 121.960708] [<ffffffc000102234>] lock_acquire+0x98/0x12c
[ 121.966663] [<ffffffc0009fa4a0>] mutex_lock_nested+0x78/0x3b4
[ 121.973057] [<ffffffc00065b624>] clock_disable_handler+0x30/0x9c
[ 121.979710] [<ffffffc0000c47e4>] process_one_work+0x190/0x4e8
[ 121.986101] [<ffffffc0000c4c74>] worker_thread+0x138/0x3c0
[ 121.992231] [<ffffffc0000cbcf0>] kthread+0xd0/0xdc
[ 121.997718] [<ffffffc000084cbc>] ret_from_fork+0xc/0x1c
[ 122.003607]
-> #1 (&nvavp->channel_info[channel_id].pushbuffer_lock){+.+...}:
[ 122.010989] [<ffffffc0001019b0>] __lock_acquire+0x22a8/0x2358
[ 122.017382] [<ffffffc000102234>] lock_acquire+0x98/0x12c
[ 122.023340] [<ffffffc0009fa4a0>] mutex_lock_nested+0x78/0x3b4
[ 122.029730] [<ffffffc00065b618>] clock_disable_handler+0x24/0x9c
[ 122.036398] [<ffffffc0000c47e4>] process_one_work+0x190/0x4e8
[ 122.042795] [<ffffffc0000c4c74>] worker_thread+0x138/0x3c0
[ 122.048932] [<ffffffc0000cbcf0>] kthread+0xd0/0xdc
[ 122.054367] [<ffffffc000084cbc>] ret_from_fork+0xc/0x1c
[ 122.060237]
-> #0 ((&nvavp->clock_disable_work)){+.+...}:
[ 122.065874] [<ffffffc0009f23ac>] print_circular_bug+0x6c/0x2f8
[ 122.072352] [<ffffffc0001014a4>] __lock_acquire+0x1d9c/0x2358
[ 122.078742] [<ffffffc000102234>] lock_acquire+0x98/0x12c
[ 122.084694] [<ffffffc0000c5ab0>] flush_work+0x3c/0x288
[ 122.090472] [<ffffffc0000c5d80>] __cancel_work_timer+0x84/0x12c
[ 122.097034] [<ffffffc0000c5e34>] cancel_work_sync+0xc/0x18
[ 122.103160] [<ffffffc00065b348>] nvavp_uninit+0x68/0x25c
[ 122.109113] [<ffffffc00065b76c>] tegra_nvavp_release+0xdc/0x150
[ 122.115689] [<ffffffc00065b874>] tegra_nvavp_video_release+0x38/0x5c
[ 122.122692] [<ffffffc000196578>] __fput+0xac/0x228
[ 122.128126] [<ffffffc0001967a8>] ____fput+0x8/0x14
[ 122.133556] [<ffffffc0000c8cfc>] task_work_run+0xc8/0x100
[ 122.139596] [<ffffffc0000abb1c>] do_exit+0x29c/0x998
[ 122.145203] [<ffffffc0000ac280>] do_group_exit+0x38/0xcc
[ 122.151156] [<ffffffc0000bb454>] get_signal_to_deliver+0x2bc/0x600
[ 122.157981] [<ffffffc000087c7c>] do_signal+0x238/0x564
[ 122.163760] [<ffffffc00008819c>] do_notify_resume+0x24/0x5c
[ 122.169974] [<ffffffc000084c20>] work_pending+0x18/0x20
[ 122.175841]
[ 122.175841] other info that might help us debug this:
[ 122.175841]
[ 122.196330] Possible unsafe locking scenario:
[ 122.196330]
[ 122.202269]        CPU0                    CPU1
[ 122.206812]        ----                    ----
[ 122.211355]   lock(&nvavp->open_lock);
[ 122.215139]                               lock(&nvavp->channel_info[channel_id].pushbuffer_lock);
[ 122.224146]                               lock(&nvavp->open_lock);
[ 122.230454]   lock((&nvavp->clock_disable_work));
[ 122.235195]
[ 122.235195] *** DEADLOCK ***
Re-order the lock acquisition in clock_disable_handler so that open_lock
is always taken before pushbuffer_lock, as a possible fix for the
deadlock scenario above.
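For illustration, a minimal sketch of the handler with the new lock order,
reconstructed from the hunk in the diff at the end of this page; the local
declarations and the unlock order at the end of the function are outside the
hunk and are assumptions:

static void clock_disable_handler(struct work_struct *work)
{
	/* Declarations reconstructed for illustration; the exact types used
	 * in nvavp_dev.c may differ. */
	struct nvavp_info *nvavp = container_of(work, struct nvavp_info,
						clock_disable_work);
	struct nvavp_channel *channel_info =
		nvavp_get_channel_info(nvavp, NVAVP_VIDEO_CHANNEL);

	/* Take open_lock first, then pushbuffer_lock, so open_lock is
	 * consistently the outer lock. */
	mutex_lock(&nvavp->open_lock);
	mutex_lock(&channel_info->pushbuffer_lock);

	if (nvavp_check_idle(nvavp, NVAVP_VIDEO_CHANNEL) && nvavp->pending) {
		nvavp->pending = false;
		nvavp_clks_disable(nvavp);
	}

	/* Release in reverse order (assumed; these calls are outside the
	 * hunk shown below). */
	mutex_unlock(&channel_info->pushbuffer_lock);
	mutex_unlock(&nvavp->open_lock);
}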
Bug 1512083
Change-Id: If0ffdd3f1e53baf9599f8cfbb47e48a285817e9e
Signed-off-by: Somasundaram S <somasundaram@nvidia.com>
Reviewed-on: http://git-master/r/408844
Reviewed-by: Mrutyunjay Sawant <msawant@nvidia.com>
Tested-by: Mrutyunjay Sawant <msawant@nvidia.com>
Diffstat (limited to 'drivers/media/platform/tegra/nvavp/nvavp_dev.c')
-rw-r--r-- | drivers/media/platform/tegra/nvavp/nvavp_dev.c | 2 |
1 file changed, 1 insertion, 1 deletion
diff --git a/drivers/media/platform/tegra/nvavp/nvavp_dev.c b/drivers/media/platform/tegra/nvavp/nvavp_dev.c
index 24ade15976af..be6c0e269a99 100644
--- a/drivers/media/platform/tegra/nvavp/nvavp_dev.c
+++ b/drivers/media/platform/tegra/nvavp/nvavp_dev.c
@@ -614,8 +614,8 @@ static void clock_disable_handler(struct work_struct *work)
 						clock_disable_work);
 
 	channel_info = nvavp_get_channel_info(nvavp, NVAVP_VIDEO_CHANNEL);
-	mutex_lock(&channel_info->pushbuffer_lock);
 	mutex_lock(&nvavp->open_lock);
+	mutex_lock(&channel_info->pushbuffer_lock);
 	if (nvavp_check_idle(nvavp, NVAVP_VIDEO_CHANNEL) && nvavp->pending) {
 		nvavp->pending = false;
 		nvavp_clks_disable(nvavp);
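For context, the other edge of the reported cycle is the release path in the
trace above, where the pending work is cancelled while open_lock is held. A
rough sketch of that path, reconstructed from the backtrace rather than from
the driver source (the function name release_path_sketch is illustrative):

/* Release-side dependency seen in the lockdep report:
 * tegra_nvavp_video_release() -> nvavp_uninit() -> cancel_work_sync(),
 * all under open_lock. Reconstructed for illustration only. */
static void release_path_sketch(struct nvavp_info *nvavp)
{
	mutex_lock(&nvavp->open_lock);
	/* cancel_work_sync() waits for clock_disable_handler() to finish;
	 * the handler in turn takes pushbuffer_lock and open_lock, which is
	 * how the cycle reported above closes. */
	cancel_work_sync(&nvavp->clock_disable_work);
	mutex_unlock(&nvavp->open_lock);
}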