MOO-cows Mailing List Archive
[cunkel@us.itd.umich.edu: Re: Server Improvement]
------- Start of forwarded message -------
Return-Path: moo-cows-errors@parc.xerox.com
Date: Tue, 18 Mar 1997 17:13:50 PST
From: Christopher Unkel <cunkel@us.itd.umich.edu>
Reply-To: Christopher Unkel <cunkel@us.itd.umich.edu>
To: moo-cows@parc.xerox.com
Subject: Re: Server Improvement
In-Reply-To: <199703181725.JAA19073@eng4.sequent.com>
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: MOO-Cows-Errors@parc.xerox.com
Precedence: bulk
On Tue, 18 Mar 1997, Ben Jackson wrote:
> I would strongly recommend that anyone working on performance enhancements
> to any software use profiling information rather than relying on the
> "divine inspiration" method. No matter how slow a particular function
> is, it won't matter how fast you make it if it's not contributing a
> significant percentage of the overall time. And only profiling can tell
> you if your changes had the desired effect. In the case of run(), which
> is hard to profile because so much of the server's time is spent there,
> line profiling is useful, and can show you exactly how often particular
> cases are executed and whether or not it's worth your time to speed them
> up.
I can't agree with this enough. But what I'd really like to comment on is
one difficulty I've had while working on WinMOO--the lack of test
cases.
The server is a relatively complex piece of software, and verifying that it
functions correctly is essentially impossible without a set of tests that
exercise the majority of its functionality. I don't have that; the best I
can do at the moment is to check that it works "overall" (does it still
load a database?), test the changed portions to make sure they function,
and then release the software and fix bugs as they turn up
in wider use. This lack of formal testing is one reason that I've kept
the "beta" label on WinMOO; until I've done thorough testing of the
software as well as a source review, I feel uncomfortable removing the
beta designation.
Similarly, I would like to do profiling work and look at performance
tuning, but without some "typical" (I know, there may be no such thing)
input cases to profile with, there's no way to make accurate
determinations of what really needs work, and whether changes really
improve performance. In this case, I would be unlikely to be satisfied
with a "laboratory-generated" input case that I or someone else
constructed; tuning against such an input would make the server faster
at whatever the test case does--that is, at how its author thought the
server was being used--which might or might not correspond to how the
server is actually being used.
What would be helpful from my point of view, and I'm guessing from other
MOO developers' point of view, would be:
1. An external tool that could be used to apply test cases to the MOO and
verify that they produce the correct results. Probably all it has to do
is open a number of connections to the MOO, apply input in some order,
and verify the output in some fashion (bearing in mind that certain
things, like the time, will legitimately differ from run to run). A
rough sketch of such a tool appears after this list.
2. A database and test data for the above tool that together exercise as
much of the server's functionality as possible. A standard technique is
to use line-coverage data (e.g., from gcov) to see how much of the code
the tests actually reach.
3. Data to reproduce what actually happens on some real MOOs. In this
case, the tool probably doesn't have to verify the results, just generate
the input.
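
Here, for concreteness, is the rough sketch promised in item 1, written in
Python. It is only an illustration of the shape such a driver might take:
the address, the login command, and the expected-output patterns below are
assumptions I've made up, not part of any existing tool. A real driver
would read its test scripts from files and would need a more careful way
of masking output that legitimately varies.

#!/usr/bin/env python3
# Rough sketch of the external test driver described in item 1 above.
# The host/port, the commands sent, and the expected-output patterns are
# all assumptions for illustration; they are not part of any existing tool.

import re
import socket

HOST, PORT = "localhost", 7777   # assumed address of the MOO under test

# Each test case is a line to send plus a regexp the reply must match.
# Regexps let the tool tolerate output that legitimately varies
# (times, object numbers, connection ids).
TEST_CASES = [
    (b"connect wizard\r\n", rb"\*\*\* Connected \*\*\*"),
    (b";1 + 1\r\n", rb"=> 2"),
]

def run_case(sock, send_line, expect_pattern, timeout=5.0):
    """Send one line to the MOO and check that the reply matches."""
    sock.sendall(send_line)
    sock.settimeout(timeout)
    reply = b""
    while re.search(expect_pattern, reply) is None:
        try:
            chunk = sock.recv(4096)
        except socket.timeout:
            break
        if not chunk:          # connection closed by the server
            break
        reply += chunk
    ok = re.search(expect_pattern, reply) is not None
    print("PASS" if ok else "FAIL", send_line.strip().decode())
    return ok

def main():
    with socket.create_connection((HOST, PORT)) as sock:
        results = [run_case(sock, line, pat) for line, pat in TEST_CASES]
    print("%d/%d cases passed" % (sum(results), len(results)))

if __name__ == "__main__":
    main()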
One technique for collecting the data in item 3 would be to use the
server-log feature to record what happens on several running MOOs.
Obviously, this could have
privacy implications for the MOO's users; that would need to be sorted
out.
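
To make the replay idea concrete, here is a similarly rough Python
sketch. The capture format is entirely hypothetical (whatever the server
log actually records would need its own parser); the point is just that
the tool should preserve the original pacing of the recorded input so the
load pattern stays realistic.

#!/usr/bin/env python3
# Sketch of replaying captured input against a test server (item 3 above).
# The capture format here is hypothetical -- one command per line, as
#   <seconds-since-start> <connection-id> <command text>
# Real data recorded via the server log would need its own parser, and the
# privacy concerns mentioned above still apply to anything captured.

import socket
import time

HOST, PORT = "localhost", 7777    # assumed address of the server under test

def replay(capture_path):
    connections = {}              # connection-id -> open socket
    start = time.monotonic()
    with open(capture_path) as capture:
        for record in capture:
            offset, conn_id, command = record.rstrip("\n").split(" ", 2)
            # Reproduce the original pacing of the recorded input.
            delay = float(offset) - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            if conn_id not in connections:
                connections[conn_id] = socket.create_connection((HOST, PORT))
            sock = connections[conn_id]
            sock.sendall(command.encode() + b"\r\n")
            # Discard whatever the server says; this tool only generates
            # input (a verifying tool would capture and compare the output).
            sock.setblocking(False)
            try:
                while sock.recv(4096):
                    pass
            except BlockingIOError:
                pass
            sock.setblocking(True)
    for sock in connections.values():
        sock.close()

if __name__ == "__main__":
    replay("capture.txt")         # hypothetical capture file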
Anyone have thoughts on the matter?
--Chris cunkel@umich.edu
ResComp Senior Network Support Technician (SNST)
Home Page: http://www-personal.engin.umich.edu/~cunkel/
This signature no verb.
------- End of forwarded message -------