Date:      Fri, 4 Jun 1999 12:27:54 -0700 (PDT)
From:      Julian Elischer <julian@whistle.com>
To:        Nate Williams <nate@mt.sri.com>
Cc:        Matthew Dillon <dillon@apollo.backplane.com>, dyson@iquest.net, freebsd-hackers@FreeBSD.ORG
Subject:   Re: 3.2-stable, panic #12 
Message-ID:  <Pine.BSF.3.95.990604120347.16696A-100000@current1.whistle.com>
In-Reply-To: <199906041508.JAA27044@mt.sri.com>



On Fri, 4 Jun 1999, Nate Williams wrote:

> 
> >     The biggest mistake that programmers working on a large project make is
> >     when they do *not* rewrite portions of the code that need to be
> >     rewritten.
> 
> Most good software engineering books would disagree with you.  The
> *BIGGEST* mistake that most programmers make is re-writing functional
> code to conform to their own style of programming, rather than
> understanding why the original programmers did it the way they did.

On the other hand, there comes a time when so many patches have been
applied, each in an attempt to modify no more than a small part of the
code, that the result becomes inefficient and hard to understand.

It is well known that tuning code usually yields only small
improvements, but that a HUGE improvement is likely to come from a
breakthrough in the way in which the problem is envisioned, leading to
a completely new way of tackling it.
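
To give a toy illustration of what I mean (a made-up example, nothing
to do with the kernel itself): you can tune a loop all you like and
gain a few percent, but rethinking the problem removes the loop
entirely:

/*
 * Tuning the O(n) loop buys a few percent; re-envisioning the
 * problem (Gauss's closed form) makes it O(1).
 */
#include <stdio.h>

static unsigned long
sum_loop(unsigned long n)		/* tuned, but still O(n) */
{
	unsigned long i, s = 0;

	for (i = 1; i <= n; i++)
		s += i;
	return (s);
}

static unsigned long
sum_closed(unsigned long n)		/* re-envisioned: O(1) */
{
	return (n * (n + 1) / 2);
}

int
main(void)
{
	/* both print 50005000 */
	printf("%lu %lu\n", sum_loop(10000), sum_closed(10000));
	return (0);
}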

MUCH of the present kernel falls into this category.


> 
> NIH is *FAR* too common a problem.  Bugs *rarely* require a complete
> re-write of the code.

NIH is not the same as "Hey, I can do this in a different way in half
the time/code". NIH is "I can do the same thing with about the same
result, so I will".

> 
> This isn't to say that often-times it's *easier* to just re-write it
> from scratch than it is to understand what the original code was doing
> in the first place, but this often leads to errors that were fixed by
> the original authors in a non-obvious manner being brought back in.

A totally different approach often avoids those problems entirely,
especially if the original authors bothered to comment them, because
the new approach can then be developed with them in mind. (Of course,
it can raise new problems.)

> 
> The above problem is almost always a problem with the original author
> not doing a proper job of documenting the work, but it doesn't justify
> ripping everything out and starting over from scratch.

True, though I'd add that sometimes the "patched index" of a piece of
code does get so high that it justifies a rewrite.



> 
> I know of *NO* programmer who does not delight in completely ripping out
> and replacing existing code with code that he has written from scratch.
> It's great fun, and it allows the person to feel better about the
> system, themselves, and make sure that they can debug the existing code
> better.  I do it all the time.  But, I know for a fact that it's rarely
> the right thing to do, especially when the folks who 'went before me'
> aren't 1st year CS students, but are seasoned professionals who have a
> clue and didn't do things on a whim.

I think that the advantage of experience is knowing WHEN to do so.
This is a judgement call that can only be made by people who see:
1/ the old code, and understand it.
2/ the new algorithm, and see how it can be implemented.

Anyone who can't see both sides cannot make that call and should BUTT OUT.

> 
> Almost *ALL* of the BSD kernel code (and most of the userland code as
> well) falls into the class of code that is written by seasoned
> professionals.  They are not infallible, but they almost always have a
> reason for why they did things the way they did.

Very true. They had a reason in 1988, and part of our decision is to
evaluate how valid those reasons are now.

In 1988, the kernel consumed 300K of a 4 MB machine, or about 10% of
physical RAM. In 1999 a kernel may consume between 2MB of an 8MB
machine and 8MB of a 512 MB machine (or between 25% and 2%). In 1988
the processor was capable of doing 8 operations during a memory cache
miss. In 1997 it was capable of doing about 50, and in 1999 it's
capable of doing about 30 with the new fast RAMs.
In 1988 the average application process size was 200K with data, and
the machine had 8MB. In 1999 the app size is 2MB but the machine has
64+ MB.

These all mean that we need to re-evaluate things. For example, we can
change the space/speed tradeoffs made in 1988 in many places.
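
To make the space/speed point concrete, here is a toy sketch (my own
made-up example, not code from the kernel): counting the set bits in a
word. The 1988-style answer trades memory for speed with a lookup
table; in 1999 the all-register version can win, because a single
cache miss on the table costs the 30-50 operations mentioned above.

#include <stdio.h>

/* 1988 style: spend 256 bytes of table, one memory load per byte. */
static unsigned char popc_table[256];

static void
init_table(void)
{
	int i;

	for (i = 1; i < 256; i++)
		popc_table[i] = (i & 1) + popc_table[i / 2];
}

static int
popcount_table(unsigned int w)
{
	return (popc_table[w & 0xff] +
	    popc_table[(w >> 8) & 0xff] +
	    popc_table[(w >> 16) & 0xff] +
	    popc_table[(w >> 24) & 0xff]);
}

/* 1999 style: ~12 register operations, no memory traffic at all. */
static int
popcount_regs(unsigned int w)
{
	w = w - ((w >> 1) & 0x55555555);
	w = (w & 0x33333333) + ((w >> 2) & 0x33333333);
	w = (w + (w >> 4)) & 0x0f0f0f0f;
	return ((w * 0x01010101) >> 24);
}

int
main(void)
{
	init_table();
	/* both print 24 */
	printf("%d %d\n", popcount_table(0xdeadbeef),
	    popcount_regs(0xdeadbeef));
	return (0);
}

Neither version is "right"; the point is that the answer depends on
the machine of the day, which is exactly why 1988's choices deserve a
second look.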

> 
> Does it mean you should never re-write entire portions of code?  Of
> course not, but it should never be taken lightly, and *IF* the original
> programmers are there and willing (and capable) to explain things to
> you, then that should be taken advantage of, and not ignored.

The problem was not that the original programmers were being ignored.
It's that the original programmers found it difficult to express to a
newcomer the subtleties of what they had internalised years before.

Both sides showed a remarkable lack of patience: Matt was in a hurry,
and John was too busy to stop and really put his ideas down in simple
terms. Remember, however, that John had "retired" from FreeBSD, so
there was no certainty at the outset that he would give any help at
all (though of course those of us who know him knew he could always be
asked for advice).

I think the whole thing was a storm in a teacup, and I think that Matt
has been in the code long enough now that the mere passing of time and
experience has made the decision as to whether he should get commit
privs back purely academic.


julian







