[Haifux] Implementing read() like UNIX guys like it

Shachar Raindel raindel at tx.technion.ac.il
Wed Apr 27 08:25:38 MSD 2011


On Tue, Apr 26, 2011 at 11:33 PM, Nadav Har'El <nyh at math.technion.ac.il> wrote:
> On Tue, Apr 26, 2011, Eli Billauer wrote about "Re: [Haifux] Implementing read() like UNIX guys like it":
>> >(...) Second, if the CPU *did* have "something useful" to do (run other
>> >processes, or whatever), it would do it, causing a bit more time to pass
>> >between read()s, and then a read() might return more than one byte. It
>> >will only return one byte when there's nothing better to do than call
>> >read() all the time.
>> >
>> That's an interesting point, but I'm not 100% sure about that one. Why
>> would the scheduler take the CPU away, if the read() operation always
>> returns one byte or two? It would eventually take it away for a while,
>> which would let data stack up, but each process would get its fair slice.
>
> Hi,
>
> Well, I guess that under a very specific set of circumstances (and a fairly
> high throughput, on a modern CPU which can do a billion cycles a second),
> the read() of one byte will take exactly the amount of time that it takes
> for another byte to become ready. But when the incoming stream is slower than
> that, a read() will very often block, and can cause a context switch if
> other processes are waiting to run.
>

Note that if you have already implemented "give away the CPU while there
is no input, until input arrives", adding the delay bit becomes
relatively simple: store the jiffies value when writing the first byte
into the buffer, compare it to the current jiffies in read(), and if the
elapsed time is smaller than the minimum, go into a timed sleep. A
sketch of this follows.
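To make that concrete, here is a minimal sketch of such a read() - not
Eli's actual driver; all the names, the 10 ms batching window and the
fixed-size buffer are made up for illustration, and the locking that a
real driver would need around buf/count is omitted:

        #include <linux/fs.h>
        #include <linux/jiffies.h>
        #include <linux/kernel.h>
        #include <linux/sched.h>
        #include <linux/string.h>
        #include <linux/uaccess.h>
        #include <linux/wait.h>

        /* Hypothetical per-device state; a real driver also needs a
         * lock around buf/count, shared with the interrupt handler. */
        struct mydev {
                wait_queue_head_t wq;        /* woken by the IRQ handler  */
                char buf[4096];              /* bytes from the hardware   */
                size_t count;                /* bytes currently in buf    */
                unsigned long first_jiffies; /* when buf went non-empty   */
        };

        static ssize_t mydev_read(struct file *filp, char __user *ubuf,
                                  size_t len, loff_t *ppos)
        {
                struct mydev *dev = filp->private_data;
                unsigned long min_delay = msecs_to_jiffies(10);
                unsigned long elapsed;
                size_t n;

                /* Give away the CPU until input arrives. */
                if (wait_event_interruptible(dev->wq, dev->count > 0))
                        return -ERESTARTSYS;

                /* The delay bit: if the first byte in the buffer is
                 * younger than the minimum, sleep out the remainder so
                 * more bytes accumulate and fewer read()s are needed. */
                elapsed = jiffies - dev->first_jiffies;
                if (elapsed < min_delay)
                        schedule_timeout_interruptible(min_delay - elapsed);

                n = min(len, dev->count);
                if (copy_to_user(ubuf, dev->buf, n))
                        return -EFAULT;
                memmove(dev->buf, dev->buf + n, dev->count - n);
                dev->count -= n;
                return n;
        }

An O_NONBLOCK path returning -EAGAIN, and the locking mentioned above,
are left out to keep the sketch short.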

>> I'm not saying this would cause some real degradation, but on a slow
>> embedded processor, seeing 10% CPU usage on a process which is supposed
>> to just read data...? The calculation is a 200 MHz processor, 100 kB/sec,
>> and 200 CPU clocks spent processing each byte (100,000 bytes/sec * 200
>> cycles/byte = 20M cycles/sec, i.e. 10% of 200 MHz).
>
> I see. Like I said, I don't know how meaningful this "10%" figure is when
> you're talking about an endless-read()-busy-loop with nothing else to do
> (if there's nothing else to do, you don't care if it takes 100% CPU :-)).
> If you do have other things to do - in the same thread or in a different
> thread - then possibly this 10% figure would be reduced, because read()s
> would start returning more than 1 byte. By the way, if in your hypothetical
> situation a read() takes 200 cycles to return, but the bytes arrive 2,000
> cycles apart, the following read() will block, and therefore possibly cause
> a context switch.

Two reasons why this might be important:
a. The impact of a task using "10%" CPU is above 10% of the processing
power - it pollutes caches, eats up CPU time in context switches as
well, and prevents "idle level" (nice'd) tasks from running.
b. A task using 10% CPU will prevent the CPU from becoming idle for
significant stretches of time, and a non-idle CPU eats up much more
energy than an idle one. While Eli's application might not be so power
sensitive, many applications are. In some cases, the Linux kernel goes
to great lengths to postpone work and batch it, to reduce the number
of hardware wakeups required (see the tickless kernel efforts, powertop
and the related changes across the entire system, and the laptop-mode
hard-disk write handling).

>
> But if you have the time and energy to program that additional read delay,
> then by all means, go ahead and try it. And do tell how noticeable the
> difference was in the end system.

I second the "do tell" request. I would be especially interested in
hearing results of a benchmark such as: run a nice'd CPU-intensive task
in parallel to the hardware-interacting task, and check the change in
time-to-complete of the CPU-intensive task. A possible harness for this
is sketched below.
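Something along these lines (entirely hypothetical, not an existing
tool) could serve as the nice'd CPU-intensive task - it burns a fixed
amount of CPU and reports the wall-clock time to complete:

        #include <stdio.h>
        #include <time.h>

        #define ITERATIONS 2000000000UL /* scale to taste for the CPU */

        int main(void)
        {
                struct timespec start, end;
                volatile unsigned long sink = 0;
                unsigned long i;

                clock_gettime(CLOCK_MONOTONIC, &start);
                for (i = 0; i < ITERATIONS; i++)
                        sink += i; /* volatile keeps the loop from
                                      being optimized out */
                clock_gettime(CLOCK_MONOTONIC, &end);

                printf("time-to-complete: %.3f seconds\n",
                       (end.tv_sec - start.tv_sec) +
                       (end.tv_nsec - start.tv_nsec) / 1e9);
                return 0;
        }

Compile with "gcc -O2 cpuburn.c -o cpuburn -lrt" (the -lrt is needed for
clock_gettime on older glibc), then run "nice -n 19 ./cpuburn" once with
the hardware-interacting task running and once without, and compare the
two reported times.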

--Shachar


