  1. From: Mike Galbraith <efault@gmx.de>
  2. Date: Sat, 20 Nov 2010 12:35:00 -0700
  3. Subject: [PATCH] sched: Improve desktop interactivity: Implement automated per session task groups
  4.  
  5. A recurring complaint from CFS users is that parallel kbuild has a negative
  6. impact on desktop interactivity. This patch implements an idea from Linus,
  7. to automatically create task groups. Currently, only per session autogroups
  8. are implemented, but the patch leaves the way open for enhancement.
  9.  
  10. Implementation: each task's signal struct contains an inherited pointer to
  11. a refcounted autogroup struct containing a task group pointer, the default
  12. for all tasks pointing to the init_task_group. When a task calls setsid(),
  13. a new task group is created, the process is moved into the new task group,
  14. and a reference to the previous task group is dropped. Child processes
  15. inherit this task group thereafter, and increase its refcount. When the
  16. last thread of a process exits, the process's reference is dropped, such
  17. that when the last process referencing an autogroup exits, the autogroup
  18. is destroyed.
  19.  
  20. At runqueue selection time, IFF a task has no cgroup assignment, its current
  21. autogroup is used.
  22.  
  23. Autogroup bandwidth is controllable by setting its nice level through the
  24. proc filesystem: cat /proc/<pid>/autogroup displays the task's group and the
  25. group's nice level, and echo <nice level> > /proc/<pid>/autogroup sets the task
  26. group's shares to the weight of a nice <nice level> task. Setting the nice level
  27. is rate limited for !admin users due to the abuse risk of task group locking.
  28.  
  29. The feature is enabled from boot by default if CONFIG_SCHED_AUTOGROUP=y is
  30. selected, but can be disabled via the boot option noautogroup, and can also
  31. be turned on/off on the fly via..
  32. echo [01] > /proc/sys/kernel/sched_autogroup_enabled.
  33. ..which will automatically move tasks to/from the root task group.
  34.  
  35. Signed-off-by: Mike Galbraith <efault@gmx.de>
  36. Cc: Oleg Nesterov <oleg@redhat.com>
  37. Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  38. Cc: Linus Torvalds <torvalds@linux-foundation.org>
  39. Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
  40. Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  41. LKML-Reference: <1290281700.28711.9.camel@maggy.simson.net>
  42. Signed-off-by: Ingo Molnar <mingo@elte.hu>
  43. ---
  44. Documentation/kernel-parameters.txt | 2
  45. fs/proc/base.c | 79 ++++++++++++
  46. include/linux/sched.h | 23 +++
  47. init/Kconfig | 12 +
  48. kernel/fork.c | 5
  49. kernel/sched.c | 13 +
  50. kernel/sched_autogroup.c | 235 ++++++++++++++++++++++++++++++++++++
  51. kernel/sched_autogroup.h | 32 ++++
  52. kernel/sched_debug.c | 29 ++--
  53. kernel/sys.c | 4
  54. kernel/sysctl.c | 11 +
  55. 11 files changed, 427 insertions(+), 18 deletions(-)
  56.  
  57. Index: linux-2.6.36/include/linux/sched.h
  58. ===================================================================
  59. --- linux-2.6.36.orig/include/linux/sched.h
  60. +++ linux-2.6.36/include/linux/sched.h
  61. @@ -506,6 +506,8 @@ struct thread_group_cputimer {
  62. spinlock_t lock;
  63. };
  64.  
  65. +struct autogroup;
  66. +
  67. /*
  68. * NOTE! "signal_struct" does not have it's own
  69. * locking, because a shared signal_struct always
  70. @@ -573,6 +575,9 @@ struct signal_struct {
  71.  
  72. struct tty_struct *tty; /* NULL if no tty */
  73.  
  74. +#ifdef CONFIG_SCHED_AUTOGROUP
  75. + struct autogroup *autogroup;
  76. +#endif
  77. /*
  78. * Cumulative resource counters for dead threads in the group,
  79. * and for reaped dead child processes forked by this group.
  80. @@ -1900,6 +1905,24 @@ int sched_rt_handler(struct ctl_table *t
  81.  
  82. extern unsigned int sysctl_sched_compat_yield;
  83.  
  84. +#ifdef CONFIG_SCHED_AUTOGROUP
  85. +extern unsigned int sysctl_sched_autogroup_enabled;
  86. +
  87. +extern void sched_autogroup_create_attach(struct task_struct *p);
  88. +extern void sched_autogroup_detach(struct task_struct *p);
  89. +extern void sched_autogroup_fork(struct signal_struct *sig);
  90. +extern void sched_autogroup_exit(struct signal_struct *sig);
  91. +#ifdef CONFIG_PROC_FS
  92. +extern void proc_sched_autogroup_show_task(struct task_struct *p, struct seq_file *m);
  93. +extern int proc_sched_autogroup_set_nice(struct task_struct *p, int *nice);
  94. +#endif
  95. +#else
  96. +static inline void sched_autogroup_create_attach(struct task_struct *p) { }
  97. +static inline void sched_autogroup_detach(struct task_struct *p) { }
  98. +static inline void sched_autogroup_fork(struct signal_struct *sig) { }
  99. +static inline void sched_autogroup_exit(struct signal_struct *sig) { }
  100. +#endif
  101. +
  102. #ifdef CONFIG_RT_MUTEXES
  103. extern int rt_mutex_getprio(struct task_struct *p);
  104. extern void rt_mutex_setprio(struct task_struct *p, int prio);
  105. Index: linux-2.6.36/kernel/sched.c
  106. ===================================================================
  107. --- linux-2.6.36.orig/kernel/sched.c
  108. +++ linux-2.6.36/kernel/sched.c
  109. @@ -78,6 +78,7 @@
  110.  
  111. #include "sched_cpupri.h"
  112. #include "workqueue_sched.h"
  113. +#include "sched_autogroup.h"
  114.  
  115. #define CREATE_TRACE_POINTS
  116. #include <trace/events/sched.h>
  117. @@ -268,6 +269,10 @@ struct task_group {
  118. struct task_group *parent;
  119. struct list_head siblings;
  120. struct list_head children;
  121. +
  122. +#ifdef CONFIG_SCHED_AUTOGROUP
  123. + struct autogroup *autogroup;
  124. +#endif
  125. };
  126.  
  127. #define root_task_group init_task_group
  128. @@ -612,11 +617,14 @@ static inline int cpu_of(struct rq *rq)
  129. */
  130. static inline struct task_group *task_group(struct task_struct *p)
  131. {
  132. + struct task_group *tg;
  133. struct cgroup_subsys_state *css;
  134.  
  135. css = task_subsys_state_check(p, cpu_cgroup_subsys_id,
  136. lockdep_is_held(&task_rq(p)->lock));
  137. - return container_of(css, struct task_group, css);
  138. + tg = container_of(css, struct task_group, css);
  139. +
  140. + return autogroup_task_group(p, tg);
  141. }
  142.  
  143. /* Change a task's cfs_rq and parent entity if it moves across CPUs/groups */
  144. @@ -1913,6 +1921,7 @@ static void deactivate_task(struct rq *r
  145. #include "sched_idletask.c"
  146. #include "sched_fair.c"
  147. #include "sched_rt.c"
  148. +#include "sched_autogroup.c"
  149. #ifdef CONFIG_SCHED_DEBUG
  150. # include "sched_debug.c"
  151. #endif
  152. @@ -7742,7 +7751,7 @@ void __init sched_init(void)
  153. #ifdef CONFIG_CGROUP_SCHED
  154. list_add(&init_task_group.list, &task_groups);
  155. INIT_LIST_HEAD(&init_task_group.children);
  156. -
  157. + autogroup_init(&init_task);
  158. #endif /* CONFIG_CGROUP_SCHED */
  159.  
  160. #if defined CONFIG_FAIR_GROUP_SCHED && defined CONFIG_SMP
  161. Index: linux-2.6.36/kernel/fork.c
  162. ===================================================================
  163. --- linux-2.6.36.orig/kernel/fork.c
  164. +++ linux-2.6.36/kernel/fork.c
  165. @@ -173,8 +173,10 @@ static inline void free_signal_struct(st
  166.  
  167. static inline void put_signal_struct(struct signal_struct *sig)
  168. {
  169. - if (atomic_dec_and_test(&sig->sigcnt))
  170. + if (atomic_dec_and_test(&sig->sigcnt)) {
  171. + sched_autogroup_exit(sig);
  172. free_signal_struct(sig);
  173. + }
  174. }
  175.  
  176. void __put_task_struct(struct task_struct *tsk)
  177. @@ -900,6 +902,7 @@ static int copy_signal(unsigned long clo
  178. posix_cpu_timers_init_group(sig);
  179.  
  180. tty_audit_fork(sig);
  181. + sched_autogroup_fork(sig);
  182.  
  183. sig->oom_adj = current->signal->oom_adj;
  184. sig->oom_score_adj = current->signal->oom_score_adj;
  185. Index: linux-2.6.36/kernel/sys.c
  186. ===================================================================
  187. --- linux-2.6.36.orig/kernel/sys.c
  188. +++ linux-2.6.36/kernel/sys.c
  189. @@ -1080,8 +1080,10 @@ SYSCALL_DEFINE0(setsid)
  190. err = session;
  191. out:
  192. write_unlock_irq(&tasklist_lock);
  193. - if (err > 0)
  194. + if (err > 0) {
  195. proc_sid_connector(group_leader);
  196. + sched_autogroup_create_attach(group_leader);
  197. + }
  198. return err;
  199. }
  200.  
  201. Index: linux-2.6.36/kernel/sched_debug.c
  202. ===================================================================
  203. --- linux-2.6.36.orig/kernel/sched_debug.c
  204. +++ linux-2.6.36/kernel/sched_debug.c
  205. @@ -87,6 +87,20 @@ static void print_cfs_group_stats(struct
  206. }
  207. #endif
  208.  
  209. +#if defined(CONFIG_CGROUP_SCHED) && \
  210. + (defined(CONFIG_FAIR_GROUP_SCHED) || defined(CONFIG_RT_GROUP_SCHED))
  211. +static void task_group_path(struct task_group *tg, char *buf, int buflen)
  212. +{
  213. + /* may be NULL if the underlying cgroup isn't fully-created yet */
  214. + if (!tg->css.cgroup) {
  215. + if (!autogroup_path(tg, buf, buflen))
  216. + buf[0] = '\0';
  217. + return;
  218. + }
  219. + cgroup_path(tg->css.cgroup, buf, buflen);
  220. +}
  221. +#endif
  222. +
  223. static void
  224. print_task(struct seq_file *m, struct rq *rq, struct task_struct *p)
  225. {
  226. @@ -115,7 +129,7 @@ print_task(struct seq_file *m, struct rq
  227. char path[64];
  228.  
  229. rcu_read_lock();
  230. - cgroup_path(task_group(p)->css.cgroup, path, sizeof(path));
  231. + task_group_path(task_group(p), path, sizeof(path));
  232. rcu_read_unlock();
  233. SEQ_printf(m, " %s", path);
  234. }
  235. @@ -147,19 +161,6 @@ static void print_rq(struct seq_file *m,
  236. read_unlock_irqrestore(&tasklist_lock, flags);
  237. }
  238.  
  239. -#if defined(CONFIG_CGROUP_SCHED) && \
  240. - (defined(CONFIG_FAIR_GROUP_SCHED) || defined(CONFIG_RT_GROUP_SCHED))
  241. -static void task_group_path(struct task_group *tg, char *buf, int buflen)
  242. -{
  243. - /* may be NULL if the underlying cgroup isn't fully-created yet */
  244. - if (!tg->css.cgroup) {
  245. - buf[0] = '\0';
  246. - return;
  247. - }
  248. - cgroup_path(tg->css.cgroup, buf, buflen);
  249. -}
  250. -#endif
  251. -
  252. void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
  253. {
  254. s64 MIN_vruntime = -1, min_vruntime, max_vruntime = -1,
  255. Index: linux-2.6.36/fs/proc/base.c
  256. ===================================================================
  257. --- linux-2.6.36.orig/fs/proc/base.c
  258. +++ linux-2.6.36/fs/proc/base.c
  259. @@ -1359,6 +1359,82 @@ static const struct file_operations proc
  260.  
  261. #endif
  262.  
  263. +#ifdef CONFIG_SCHED_AUTOGROUP
  264. +/*
  265. + * Print out autogroup related information:
  266. + */
  267. +static int sched_autogroup_show(struct seq_file *m, void *v)
  268. +{
  269. + struct inode *inode = m->private;
  270. + struct task_struct *p;
  271. +
  272. + p = get_proc_task(inode);
  273. + if (!p)
  274. + return -ESRCH;
  275. + proc_sched_autogroup_show_task(p, m);
  276. +
  277. + put_task_struct(p);
  278. +
  279. + return 0;
  280. +}
  281. +
  282. +static ssize_t
  283. +sched_autogroup_write(struct file *file, const char __user *buf,
  284. + size_t count, loff_t *offset)
  285. +{
  286. + struct inode *inode = file->f_path.dentry->d_inode;
  287. + struct task_struct *p;
  288. + char buffer[PROC_NUMBUF];
  289. + long nice;
  290. + int err;
  291. +
  292. + memset(buffer, 0, sizeof(buffer));
  293. + if (count > sizeof(buffer) - 1)
  294. + count = sizeof(buffer) - 1;
  295. + if (copy_from_user(buffer, buf, count))
  296. + return -EFAULT;
  297. +
  298. + err = strict_strtol(strstrip(buffer), 0, &nice);
  299. + if (err)
  300. + return -EINVAL;
  301. +
  302. + p = get_proc_task(inode);
  303. + if (!p)
  304. + return -ESRCH;
  305. +
  306. + err = nice;
  307. + err = proc_sched_autogroup_set_nice(p, &err);
  308. + if (err)
  309. + count = err;
  310. +
  311. + put_task_struct(p);
  312. +
  313. + return count;
  314. +}
  315. +
  316. +static int sched_autogroup_open(struct inode *inode, struct file *filp)
  317. +{
  318. + int ret;
  319. +
  320. + ret = single_open(filp, sched_autogroup_show, NULL);
  321. + if (!ret) {
  322. + struct seq_file *m = filp->private_data;
  323. +
  324. + m->private = inode;
  325. + }
  326. + return ret;
  327. +}
  328. +
  329. +static const struct file_operations proc_pid_sched_autogroup_operations = {
  330. + .open = sched_autogroup_open,
  331. + .read = seq_read,
  332. + .write = sched_autogroup_write,
  333. + .llseek = seq_lseek,
  334. + .release = single_release,
  335. +};
  336. +
  337. +#endif /* CONFIG_SCHED_AUTOGROUP */
  338. +
  339. static ssize_t comm_write(struct file *file, const char __user *buf,
  340. size_t count, loff_t *offset)
  341. {
  342. @@ -2679,6 +2755,9 @@ static const struct pid_entry tgid_base_
  343. #ifdef CONFIG_SCHED_DEBUG
  344. REG("sched", S_IRUGO|S_IWUSR, proc_pid_sched_operations),
  345. #endif
  346. +#ifdef CONFIG_SCHED_AUTOGROUP
  347. + REG("autogroup", S_IRUGO|S_IWUSR, proc_pid_sched_autogroup_operations),
  348. +#endif
  349. REG("comm", S_IRUGO|S_IWUSR, proc_pid_set_comm_operations),
  350. #ifdef CONFIG_HAVE_ARCH_TRACEHOOK
  351. INF("syscall", S_IRUSR, proc_pid_syscall),
  352. Index: linux-2.6.36/kernel/sched_autogroup.h
  353. ===================================================================
  354. --- /dev/null
  355. +++ linux-2.6.36/kernel/sched_autogroup.h
  356. @@ -0,0 +1,32 @@
  357. +#ifdef CONFIG_SCHED_AUTOGROUP
  358. +
  359. +struct autogroup {
  360. + struct kref kref;
  361. + struct task_group *tg;
  362. + struct rw_semaphore lock;
  363. + unsigned long id;
  364. + int nice;
  365. +};
  366. +
  367. +static inline struct task_group *
  368. +autogroup_task_group(struct task_struct *p, struct task_group *tg);
  369. +
  370. +#else /* !CONFIG_SCHED_AUTOGROUP */
  371. +
  372. +static inline void autogroup_init(struct task_struct *init_task) { }
  373. +static inline void autogroup_free(struct task_group *tg) { }
  374. +
  375. +static inline struct task_group *
  376. +autogroup_task_group(struct task_struct *p, struct task_group *tg)
  377. +{
  378. + return tg;
  379. +}
  380. +
  381. +#ifdef CONFIG_SCHED_DEBUG
  382. +static inline int autogroup_path(struct task_group *tg, char *buf, int buflen)
  383. +{
  384. + return 0;
  385. +}
  386. +#endif
  387. +
  388. +#endif /* CONFIG_SCHED_AUTOGROUP */
  389. Index: linux-2.6.36/kernel/sched_autogroup.c
  390. ===================================================================
  391. --- /dev/null
  392. +++ linux-2.6.36/kernel/sched_autogroup.c
  393. @@ -0,0 +1,235 @@
  394. +#ifdef CONFIG_SCHED_AUTOGROUP
  395. +
  396. +#include <linux/proc_fs.h>
  397. +#include <linux/seq_file.h>
  398. +#include <linux/kallsyms.h>
  399. +#include <linux/utsname.h>
  400. +
  401. +unsigned int __read_mostly sysctl_sched_autogroup_enabled = 1;
  402. +static struct autogroup autogroup_default;
  403. +static atomic_t autogroup_seq_nr;
  404. +
  405. +static void autogroup_init(struct task_struct *init_task)
  406. +{
  407. + autogroup_default.tg = &init_task_group;
  408. + init_task_group.autogroup = &autogroup_default;
  409. + kref_init(&autogroup_default.kref);
  410. + init_rwsem(&autogroup_default.lock);
  411. + init_task->signal->autogroup = &autogroup_default;
  412. +}
  413. +
  414. +static inline void autogroup_free(struct task_group *tg)
  415. +{
  416. + kfree(tg->autogroup);
  417. +}
  418. +
  419. +static inline void autogroup_destroy(struct kref *kref)
  420. +{
  421. + struct autogroup *ag = container_of(kref, struct autogroup, kref);
  422. +
  423. + sched_destroy_group(ag->tg);
  424. +}
  425. +
  426. +static inline void autogroup_kref_put(struct autogroup *ag)
  427. +{
  428. + kref_put(&ag->kref, autogroup_destroy);
  429. +}
  430. +
  431. +static inline struct autogroup *autogroup_kref_get(struct autogroup *ag)
  432. +{
  433. + kref_get(&ag->kref);
  434. + return ag;
  435. +}
  436. +
  437. +static inline struct autogroup *autogroup_create(void)
  438. +{
  439. + struct autogroup *ag = kzalloc(sizeof(*ag), GFP_KERNEL);
  440. + struct task_group *tg;
  441. +
  442. + if (!ag)
  443. + goto out_fail;
  444. +
  445. + tg = sched_create_group(&init_task_group);
  446. +
  447. + if (IS_ERR(tg))
  448. + goto out_free;
  449. +
  450. + kref_init(&ag->kref);
  451. + init_rwsem(&ag->lock);
  452. + ag->id = atomic_inc_return(&autogroup_seq_nr);
  453. + ag->tg = tg;
  454. + tg->autogroup = ag;
  455. +
  456. + return ag;
  457. +
  458. +out_free:
  459. + kfree(ag);
  460. +out_fail:
  461. + if (printk_ratelimit()) {
  462. + printk(KERN_WARNING "autogroup_create: %s failure.\n",
  463. + ag ? "sched_create_group()" : "kmalloc()");
  464. + }
  465. +
  466. + return autogroup_kref_get(&autogroup_default);
  467. +}
  468. +
  469. +static inline bool
  470. +task_wants_autogroup(struct task_struct *p, struct task_group *tg)
  471. +{
  472. + if (tg != &root_task_group)
  473. + return false;
  474. +
  475. + if (p->sched_class != &fair_sched_class)
  476. + return false;
  477. +
  478. + /*
  479. + * We can only assume the task group can't go away on us if
  480. + * autogroup_move_group() can see us on ->thread_group list.
  481. + */
  482. + if (p->flags & PF_EXITING)
  483. + return false;
  484. +
  485. + return true;
  486. +}
  487. +
  488. +static inline struct task_group *
  489. +autogroup_task_group(struct task_struct *p, struct task_group *tg)
  490. +{
  491. + int enabled = ACCESS_ONCE(sysctl_sched_autogroup_enabled);
  492. +
  493. + if (enabled && task_wants_autogroup(p, tg))
  494. + return p->signal->autogroup->tg;
  495. +
  496. + return tg;
  497. +}
  498. +
  499. +static void
  500. +autogroup_move_group(struct task_struct *p, struct autogroup *ag)
  501. +{
  502. + struct autogroup *prev;
  503. + struct task_struct *t;
  504. + unsigned long flags;
  505. +
  506. + BUG_ON(!lock_task_sighand(p, &flags));
  507. +
  508. + prev = p->signal->autogroup;
  509. + if (prev == ag) {
  510. + unlock_task_sighand(p, &flags);
  511. + return;
  512. + }
  513. +
  514. + p->signal->autogroup = autogroup_kref_get(ag);
  515. + smp_mb();
  516. +
  517. + t = p;
  518. + do {
  519. + sched_move_task(t);
  520. + } while_each_thread(p, t);
  521. +
  522. + unlock_task_sighand(p, &flags);
  523. + autogroup_kref_put(prev);
  524. +}
  525. +
  526. +/* Allocates GFP_KERNEL, cannot be called under any spinlock */
  527. +void sched_autogroup_create_attach(struct task_struct *p)
  528. +{
  529. + struct autogroup *ag = autogroup_create();
  530. +
  531. + autogroup_move_group(p, ag);
  532. + /* drop extra reference added by autogroup_create() */
  533. + autogroup_kref_put(ag);
  534. +}
  535. +EXPORT_SYMBOL(sched_autogroup_create_attach);
  536. +
  537. +/* Cannot be called under siglock. Currently has no users */
  538. +void sched_autogroup_detach(struct task_struct *p)
  539. +{
  540. + autogroup_move_group(p, &autogroup_default);
  541. +}
  542. +EXPORT_SYMBOL(sched_autogroup_detach);
  543. +
  544. +void sched_autogroup_fork(struct signal_struct *sig)
  545. +{
  546. + struct task_struct *p = current;
  547. +
  548. + spin_lock_irq(&p->sighand->siglock);
  549. + sig->autogroup = autogroup_kref_get(p->signal->autogroup);
  550. + spin_unlock_irq(&p->sighand->siglock);
  551. +}
  552. +
  553. +void sched_autogroup_exit(struct signal_struct *sig)
  554. +{
  555. + struct autogroup *ag;
  556. +
  557. + rcu_read_lock();
  558. + ag = rcu_dereference(sig->autogroup);
  559. + rcu_read_unlock();
  560. + autogroup_kref_put(ag);
  561. +}
  562. +
  563. +static int __init setup_autogroup(char *str)
  564. +{
  565. + sysctl_sched_autogroup_enabled = 0;
  566. +
  567. + return 1;
  568. +}
  569. +
  570. +__setup("noautogroup", setup_autogroup);
  571. +
  572. +#ifdef CONFIG_PROC_FS
  573. +
  574. +/* Called with siglock held. */
  575. +int proc_sched_autogroup_set_nice(struct task_struct *p, int *nice)
  576. +{
  577. + static unsigned long next = INITIAL_JIFFIES;
  578. + struct autogroup *ag;
  579. + int err;
  580. +
  581. + if (*nice < -20 || *nice > 19)
  582. + return -EINVAL;
  583. +
  584. + err = security_task_setnice(current, *nice);
  585. + if (err)
  586. + return err;
  587. +
  588. + if (*nice < 0 && !can_nice(current, *nice))
  589. + return -EPERM;
  590. +
  591. + /* this is a heavy operation taking global locks.. */
  592. + if (!capable(CAP_SYS_ADMIN) && time_before(jiffies, next))
  593. + return -EAGAIN;
  594. +
  595. + next = HZ / 10 + jiffies;
  596. + ag = autogroup_kref_get(p->signal->autogroup);
  597. +
  598. + down_write(&ag->lock);
  599. + err = sched_group_set_shares(ag->tg, prio_to_weight[*nice + 20]);
  600. + if (!err)
  601. + ag->nice = *nice;
  602. + up_write(&ag->lock);
  603. +
  604. + autogroup_kref_put(ag);
  605. +
  606. + return err;
  607. +}
  608. +
  609. +void proc_sched_autogroup_show_task(struct task_struct *p, struct seq_file *m)
  610. +{
  611. + struct autogroup *ag = autogroup_kref_get(p->signal->autogroup);
  612. +
  613. + down_read(&ag->lock);
  614. + seq_printf(m, "/autogroup-%ld nice %d\n", ag->id, ag->nice);
  615. + up_read(&ag->lock);
  616. +
  617. + autogroup_kref_put(ag);
  618. +}
  619. +#endif /* CONFIG_PROC_FS */
  620. +
  621. +#ifdef CONFIG_SCHED_DEBUG
  622. +static inline int autogroup_path(struct task_group *tg, char *buf, int buflen)
  623. +{
  624. + return snprintf(buf, buflen, "%s-%ld", "/autogroup", tg->autogroup->id);
  625. +}
  626. +#endif /* CONFIG_SCHED_DEBUG */
  627. +
  628. +#endif /* CONFIG_SCHED_AUTOGROUP */
  629. Index: linux-2.6.36/kernel/sysctl.c
  630. ===================================================================
  631. --- linux-2.6.36.orig/kernel/sysctl.c
  632. +++ linux-2.6.36/kernel/sysctl.c
  633. @@ -384,6 +384,17 @@ static struct ctl_table kern_table[] = {
  634. .mode = 0644,
  635. .proc_handler = proc_dointvec,
  636. },
  637. +#ifdef CONFIG_SCHED_AUTOGROUP
  638. + {
  639. + .procname = "sched_autogroup_enabled",
  640. + .data = &sysctl_sched_autogroup_enabled,
  641. + .maxlen = sizeof(unsigned int),
  642. + .mode = 0644,
  643. + .proc_handler = proc_dointvec,
  644. + .extra1 = &zero,
  645. + .extra2 = &one,
  646. + },
  647. +#endif
  648. #ifdef CONFIG_PROVE_LOCKING
  649. {
  650. .procname = "prove_locking",
  651. Index: linux-2.6.36/init/Kconfig
  652. ===================================================================
  653. --- linux-2.6.36.orig/init/Kconfig
  654. +++ linux-2.6.36/init/Kconfig
  655. @@ -652,6 +652,18 @@ config DEBUG_BLK_CGROUP
  656.  
  657. endif # CGROUPS
  658.  
  659. +config SCHED_AUTOGROUP
  660. + bool "Automatic process group scheduling"
  661. + select CGROUPS
  662. + select CGROUP_SCHED
  663. + select FAIR_GROUP_SCHED
  664. + help
  665. + This option optimizes the scheduler for common desktop workloads by
  666. + automatically creating and populating task groups. This separation
  667. + of workloads isolates aggressive CPU burners (like build jobs) from
  668. + desktop applications. Task group autogeneration is currently based
  669. + upon task session.
  670. +
  671. config MM_OWNER
  672. bool
  673.  
  674. Index: linux-2.6.36/Documentation/kernel-parameters.txt
  675. ===================================================================
  676. --- linux-2.6.36.orig/Documentation/kernel-parameters.txt
  677. +++ linux-2.6.36/Documentation/kernel-parameters.txt
  678. @@ -1610,6 +1610,8 @@ and is between 256 and 4096 characters.
  679. noapic [SMP,APIC] Tells the kernel to not make use of any
  680. IOAPICs that may be present in the system.
  681.  
  682. + noautogroup Disable scheduler automatic task group creation.
  683. +
  684. nobats [PPC] Do not use BATs for mapping kernel lowmem
  685. on "Classic" PPC cores.