This is an idea about increasing startup speed: Suppose we want to achieve state X, and the processes that need to be executed to reach X look like:

  0 --> A --------.
                  +-> X
  0 --> B --> C --'

Now assume that for each process we record how long it runs (or how long it takes to reach a state). After a boot we notice that A takes a long time, but B and C finish quickly:

  run: | -------------------------> time
       |AAAAAAAAAAAAAAAAAAAAAA|
       |BBBBCCCC              |
       |                      |(state X reached)
       v tasks

So at the next boot we can launch B and C with a lower priority (to keep the CPUs busy for A):

  run: | -------------------------> time
       |AAAAAAAAAAAAAAAAA|
       |BBBBBBBCCCCCCC   |
       |                 |(state X reached)
       v tasks

It's a project scheduling problem; the goal is to reduce the total duration. Of course, which resources are waited for may vary from task to task (e.g. waiting for DHCP, waiting for disk, waiting for CPU time). In general, executing more in parallel is better. An implementation could work incrementally, raising or lowering the execution priority of a task for its next run based on the previous run's duration (a rough sketch follows below). This can be applied to shutdown too, or whenever some state is to be reached.
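Here is a minimal sketch of what that incremental adjustment could look like. The task names, nice-value bounds, step size, and the idea of treating the longest-running task as the critical path are all assumptions for illustration; none of this is an existing systemd or init interface.

  import json

  NICE_MIN, NICE_MAX = 0, 19   # only deprioritise, never boost above the default
  STEP = 2                     # how far to shift a task's nice value per boot


  def adjust_priorities(durations, nice_values):
      """Given last boot's per-task durations (seconds) and the nice values
      used for that boot, return nice values for the next boot.

      The longest task is assumed to sit on the critical path towards state X,
      so it is nudged back towards nice 0; every other task is nudged towards
      a higher nice value so it only uses CPU the critical task leaves idle.
      """
      critical = max(durations, key=durations.get)
      next_nice = {}
      for task, nice in nice_values.items():
          if task == critical:
              next_nice[task] = max(NICE_MIN, nice - STEP)
          else:
              next_nice[task] = min(NICE_MAX, nice + STEP)
      return next_nice


  if __name__ == "__main__":
      # Example numbers matching the figures above: A dominates, B and C are short.
      durations = {"A": 22.0, "B": 4.0, "C": 4.0}
      nice_values = {"A": 0, "B": 0, "C": 0}
      print(json.dumps(adjust_priorities(durations, nice_values), indent=2))
      # -> A stays at nice 0, B and C move to nice 2 for the next boot

Repeating this every boot converges on keeping the critical-path task at full priority while the short tasks fill in the gaps, which is the effect shown in the second chart.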
Well, more often than not a service is slow at start-up simply because it waits for another service to complete. Your suggestion would work only if things are primarily CPU bound, and not bound to other deps...