September 13th 17, 06:15 PM, posted to rec.bicycles.tech
Radey Shouman
program to compute gears, with table

John B. writes:

On Tue, 12 Sep 2017 16:30:42 -0400, Radey Shouman wrote:

John B. writes:

On Tue, 12 Sep 2017 06:06:56 +0200, Emanuel Berg wrote:

John B. wrote:

That isn't true at all. I have definitely improved the speed of a C
program by using assembler language subroutines, and I even had two C
compilers that compiled the same program into two different sizes and
ran the same "test" program at two different speeds.

Obviously two different programs will be of
different sizes and run at different speeds.

But that wasn't what I said at all. As I said, the same code compiled
with two different compilers resulted in both a different-size compiled
application and a difference in speed when running.

With compilers to do optimization, and with much increased hardware to
make optimization unnecessary to begin with, there is close to zero
gain re-writing C into assembler.

Except when it does make a difference.

It's an undertaking that isn't proportional to that gain, so it is
rather done when there is a need to manipulate hardware directly or in
ways which the high-level language isn't suited for.

I'm not sure that is correct in all cases, although of course modern
computers run at speeds that make the slower software appear
satisfactory. But I did a search on the question "is modern software
written in assembler" and the first hit replied:

"Probably more than most people think, especially in the
microcontroller field. I write in assembler when it's appropriate,
which for the kind of work I do is most of the time."


I write in assembler every day, not on any rational basis, but because
that's how my boss did it back in the day.

The big difference between new processors and old, from my point of
view, is the much deeper instruction pipelines. In order to get the
most from these machines one should write in the least straightforward
way possible, doing a little of this, then a little of that, so that
there is as long a time as possible between setting some register's
value and using it. Compilers are good at this, human beings not so
much, especially when the code has to be debugged and modified at some
time in the unknowable future.
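
To make that concrete, here is a rough sketch in C (a toy example of my
own, not from any real project): the first loop is one long dependency
chain, while the second interleaves four independent accumulators so
the pipeline has useful work to do while each add is still in flight.
An optimizing compiler will often make this transformation on its own.

#include <stddef.h>

/* One accumulator: every add depends on the previous one, so the
   pipeline mostly waits on the dependency chain. */
double sum_chained(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}

/* Four independent accumulators: while one add is in flight the
   processor can start another -- the "little of this, little of
   that" interleaving that compilers are good at and people find
   hard to maintain. */
double sum_interleaved(const double *a, size_t n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; ++i)    /* leftover elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}

On a deeply pipelined machine the second version is usually noticeably
faster on large arrays, though reassociating the sum can change the
last few bits of the floating-point result.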

On the other hand, in assembler one may use the low level processor
behavior to make sure things are done in an efficient way -- for example
carry and overflow conditions are straightforwardly but non-portably
checked. In C, if you want to make sure the compiler does what you
think it should you have to check the generated assembly, and possibly
contort your code to make your intention "clear".
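
For instance (again a toy sketch of mine, not from that code): the
portable C test for an unsigned carry has to be phrased so that it
cannot overflow itself, and then you inspect the generated assembly to
see whether the compiler recognized the idiom and turned it back into a
flag check.

#include <stdint.h>
#include <stdbool.h>

/* Portable carry-out test for a 32-bit unsigned add: a + b carries
   exactly when a > UINT32_MAX - b.  The subtraction cannot overflow,
   which is the contortion.  In assembler this is just the carry flag
   after the add, e.g. on x86:  add eax, ebx / jc carry_out. */
bool add_would_carry(uint32_t a, uint32_t b)
{
    return a > UINT32_MAX - b;
}

GCC and Clang also provide __builtin_add_overflow(), which is closer to
what the hardware does, but it isn't standard C and isn't portable to
every compiler.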


Ultimately I disassembled the two test programs from the two different
C compilers and found that the difference between the two was that the
Microsoft compiler saved the state, all the registers, etc., then
called the subroutine, then recovered the state and went on to the next
step. A sort of bulletproofing, I guess you'd call it. The other
compiler apparently figured that the programmer knew what he was doing,
and if you wrote write("Good Morning\n"); it just went ahead and did
it.


That's called "inlining". Not possible with compiled library routines,
and increases code size, sometimes dramatically. Can also worsen icache
behavior, sometimes to the point of running slower.
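
Roughly, the trade-off looks like this (a toy example with made-up
names, not John's actual test program): the plain function is a
candidate for a real call, with the register saving and restoring he
describes; the inline one invites the compiler to paste its body at
each call site, skipping the call overhead but repeating the code.
Whether either actually ends up inlined is the optimizer's decision,
which is why looking at the generated assembly is the only way to be
sure.

#include <stdio.h>

/* May be compiled as an out-of-line call: the compiler follows the
   calling convention, saving and restoring whatever it clobbers. */
void greet_call(void)
{
    fputs("Good Morning\n", stdout);
}

/* Hinted for inlining: the body may be expanded directly at each
   call site.  Every copy costs code size, and enough copies can
   start pushing useful code out of the instruction cache. */
static inline void greet_inline(void)
{
    fputs("Good Morning\n", stdout);
}

int main(void)
{
    greet_call();    /* likely a real call instruction at -O0 */
    greet_inline();  /* likely expanded in place at -O2 */
    return 0;
}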


--