Linux workqueue - workqueue overview - code analysis

Tags: linux_workqueue

http://blog.csdn.net/cc289123557/article/details/52833989


Article series

1. Linux workqueue - workqueue overview
2. Linux workqueue - creating the workqueue_struct
3. Linux workqueue - adding a work_struct to a workqueue
4. Linux workqueue - how a work_struct gets called

How a work_struct gets called

A work_struct is invoked from the worker_thread() function; the code is as follows:

static int worker_thread(void *__worker)
{
    struct worker *worker = __worker;
    struct worker_pool *pool = worker->pool;

    /* tell the scheduler that this is a workqueue worker */
    worker->task->flags |= PF_WQ_WORKER;
woke_up:
    spin_lock_irq(&pool->lock);

    /* am I supposed to die? */
    if (unlikely(worker->flags & WORKER_DIE)) {
        /*
         * If this worker has been marked WORKER_DIE, detach it from the
         * worker_pool, free it and return; otherwise keep going below.
         */
        spin_unlock_irq(&pool->lock);
        WARN_ON_ONCE(!list_empty(&worker->entry));
        worker->task->flags &= ~PF_WQ_WORKER;

        set_task_comm(worker->task, "kworker/dying");
        ida_simple_remove(&pool->worker_ida, worker->id);
        worker_detach_from_pool(worker, pool);
        kfree(worker);
        return 0;
    }

    worker_leave_idle(worker);
recheck:
    /* no more worker necessary? */
    if (!need_more_worker(pool))    /* is there any work waiting to run? if not, go to sleep */
        goto sleep;

    /* do we need to manage? */
    if (unlikely(!may_start_working(pool)) && manage_workers(worker))
        goto recheck;

    /*
     * ->scheduled list can only be filled while a worker is
     * preparing to process a work or actually processing it.
     * Make sure nobody diddled with it while I was sleeping.
     */
    WARN_ON_ONCE(!list_empty(&worker->scheduled));

    /*
     * Finish PREP stage.  We're guaranteed to have at least one idle
     * worker or that someone else has already assumed the manager
     * role.  This is where @worker starts participating in concurrency
     * management if applicable and concurrency management is restored
     * after being rebound.  See rebind_workers() for details.
     */
    worker_clr_flags(worker, WORKER_PREP | WORKER_REBOUND);

    do {
        /* loop: pull pending work items off the list and execute each one */
        struct work_struct *work =
            list_first_entry(&pool->worklist,
                     struct work_struct, entry);

        pool->watchdog_ts = jiffies;

        if (likely(!(*work_data_bits(work) & WORK_STRUCT_LINKED))) {
            /* optimization path, not strictly necessary */
            process_one_work(worker, work);
            if (unlikely(!list_empty(&worker->scheduled)))
                process_scheduled_works(worker);
        } else {
            /* move this work (plus any linked works) onto worker->scheduled */
            move_linked_works(work, &worker->scheduled, NULL);
            /* and execute everything on the scheduled list here */
            process_scheduled_works(worker);
        }
    } while (keep_working(pool));

    worker_set_flags(worker, WORKER_PREP);
sleep:
    /*
     * pool->lock is held and there's no work to process and no need to
     * manage, sleep.  Workers are woken up only while holding
     * pool->lock or from local cpu, so setting the current state
     * before releasing pool->lock is enough to prevent losing any
     * event.
     */
    worker_enter_idle(worker);
    __set_current_state(TASK_INTERRUPTIBLE);
    spin_unlock_irq(&pool->lock);
    schedule();    /* invoke the scheduler; sleep until woken up again */
    goto woke_up;
}
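
For context, the work items that this loop pulls off pool->worklist are queued from driver or subsystem code through the standard workqueue API. Below is a minimal, hypothetical module-style sketch (my_work, my_work_fn and the module boilerplate are illustrative names, not part of the article's code); DECLARE_WORK, schedule_work and cancel_work_sync are the regular kernel interfaces for this:

#include <linux/module.h>
#include <linux/workqueue.h>

/* hypothetical example: a statically defined work item */
static void my_work_fn(struct work_struct *work)
{
    pr_info("my_work_fn running in a kworker thread\n");
}

static DECLARE_WORK(my_work, my_work_fn);

static int __init my_init(void)
{
    /*
     * schedule_work() puts the item on the system workqueue; it ends up
     * on a worker_pool's worklist, where the worker_thread() loop above
     * picks it up and hands it to process_one_work().
     */
    schedule_work(&my_work);
    return 0;
}

static void __exit my_exit(void)
{
    /* make sure the work is neither pending nor running before unload */
    cancel_work_sync(&my_work);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");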

 
 
  
process_scheduled_works() drains the worker's scheduled list and calls process_one_work() for each entry; process_one_work() is where the function stored in the work item is actually executed.
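
For reference, in kernels of this era process_scheduled_works() is essentially just a drain loop over worker->scheduled, roughly like the sketch below (paraphrased rather than copied verbatim, so check your own kernel tree for the exact version):

static void process_scheduled_works(struct worker *worker)
{
    /* keep taking the first scheduled work until the list is empty */
    while (!list_empty(&worker->scheduled)) {
        struct work_struct *work =
            list_first_entry(&worker->scheduled,
                     struct work_struct, entry);
        process_one_work(worker, work);
    }
}

process_one_work() itself is listed next: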

static void process_one_work(struct worker *worker, struct work_struct *work)
__releases(&pool->lock)
__acquires(&pool->lock)
{
    struct pool_workqueue *pwq = get_work_pwq(work);
    struct worker_pool *pool = worker->pool;
    bool cpu_intensive = pwq->wq->flags & WQ_CPU_INTENSIVE;
    int work_color;
    struct worker *collision;
#ifdef CONFIG_LOCKDEP
    /*
     * It is permissible to free the struct work_struct from
     * inside the function that is called from it, this we need to
     * take into account for lockdep too.  To avoid bogus "held
     * lock freed" warnings as well as problems when looking into
     * work->lockdep_map, make a copy and use that here.
     */
    struct lockdep_map lockdep_map;

    lockdep_copy_map(&lockdep_map, &work->lockdep_map);
#endif
    /* ensure we're on the correct CPU */
    WARN_ON_ONCE(!(pool->flags & POOL_DISASSOCIATED) &&
             raw_smp_processor_id() != pool->cpu);

    /*
     * A single work shouldn't be executed concurrently by
     * multiple workers on a single cpu.  Check whether anyone is
     * already processing the work.  If so, defer the work to the
     * currently executing one.
     */
    collision = find_worker_executing_work(pool, work);
    if (unlikely(collision)) {
        move_linked_works(work, &collision->scheduled, NULL);
        return;
    }

    /* claim and dequeue */
    debug_work_deactivate(work);
    hash_add(pool->busy_hash, &worker->hentry, (unsigned long)work);
    worker->current_work = work;
    worker->current_func = work->func;    /* the work's callback is transferred into the worker */
    worker->current_pwq = pwq;
    work_color = get_work_color(work);

    list_del_init(&work->entry);    /* remove the work from the worker pool's pending list */

    /*
     * CPU intensive works don't participate in concurrency management.
     * They're the scheduler's responsibility.  This takes @worker out
     * of concurrency management and the next code block will chain
     * execution of the pending work items.
     */
    if (unlikely(cpu_intensive))
        worker_set_flags(worker, WORKER_CPU_INTENSIVE);

    /*
     * Wake up another worker if necessary.  The condition is always
     * false for normal per-cpu workers since nr_running would always
     * be >= 1 at this point.  This is used to chain execution of the
     * pending work items for WORKER_NOT_RUNNING workers such as the
     * UNBOUND and CPU_INTENSIVE ones.
     */
    if (need_more_worker(pool))
        wake_up_worker(pool);

    /*
     * Record the last pool and clear PENDING which should be the last
     * update to @work.  Also, do this inside @pool->lock so that
     * PENDING and queued state changes happen together while IRQ is
     * disabled.
     */
    set_work_pool_and_clear_pending(work, pool->id);

    spin_unlock_irq(&pool->lock);

    lock_map_acquire_read(&pwq->wq->lockdep_map);
    lock_map_acquire(&lockdep_map);
    trace_workqueue_execute_start(work);
    worker->current_func(work);    /* execute work->func */
    /*
     * While we must be careful to not use "work" after this, the trace
     * point will only record its address.
     */
    trace_workqueue_execute_end(work);
    lock_map_release(&lockdep_map);
    lock_map_release(&pwq->wq->lockdep_map);

    if (unlikely(in_atomic() || lockdep_depth(current) > 0)) {
        pr_err("BUG: workqueue leaked lock or atomic: %s/0x%08x/%d\n"
               "     last function: %pf\n",
               current->comm, preempt_count(), task_pid_nr(current),
               worker->current_func);
        debug_show_held_locks(current);
        dump_stack();
    }

    /*
     * The following prevents a kworker from hogging CPU on !PREEMPT
     * kernels, where a requeueing work item waiting for something to
     * happen could deadlock with stop_machine as such work item could
     * indefinitely requeue itself while all other CPUs are trapped in
     * stop_machine. At the same time, report a quiescent RCU state so
     * the same condition doesn't freeze RCU.
     */
    cond_resched_rcu_qs();

    spin_lock_irq(&pool->lock);

    /* clear cpu intensive status */
    if (unlikely(cpu_intensive))
        worker_clr_flags(worker, WORKER_CPU_INTENSIVE);

    /* we're done with it, release */
    hash_del(&worker->hentry);
    worker->current_work = NULL;
    worker->current_func = NULL;
    worker->current_pwq = NULL;
    worker->desc_valid = false;
    pwq_dec_nr_in_flight(pwq, work_color);
}
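
Two details in this function are worth illustrating: the lockdep map is copied before the callback runs, and *work is never dereferenced again after worker->current_func(work) returns. Both exist because a handler is allowed to free its own work_struct. The sketch below shows that pattern with a dynamically allocated request (my_request, my_request_fn and submit_request are hypothetical names used only for illustration):

#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/* hypothetical example: a dynamically allocated, self-freeing work item */
struct my_request {
    struct work_struct work;
    int payload;
};

static void my_request_fn(struct work_struct *work)
{
    struct my_request *req = container_of(work, struct my_request, work);

    pr_info("handling payload %d\n", req->payload);

    /*
     * Freeing the object that embeds the work_struct inside its own
     * handler is legal: by this point process_one_work() has already
     * cleared PENDING and made its private copy of the lockdep map,
     * and it does not touch *work again after the callback returns.
     */
    kfree(req);
}

static int submit_request(int payload)
{
    struct my_request *req = kzalloc(sizeof(*req), GFP_KERNEL);

    if (!req)
        return -ENOMEM;

    req->payload = payload;
    INIT_WORK(&req->work, my_request_fn);
    schedule_work(&req->work);
    return 0;
}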
 
 
  
