8.1. Getting the Most Out of Volunteers





Why do volunteers work on free software projects?[1]

[1] This question was studied in detail, with interesting results,

in a paper by Karim Lakhani and Robert G. Wolf, entitled

"Why Hackers Do What They Do: Understanding

Motivation and Effort in Free/Open Source Software

Projects." See http://freesoftware.mit.edu/papers/lakhaniwolf.pdf.





When asked, many claim they do it because

they want to produce good software, or want to be personally involved

in fixing the bugs that matter to them. But these reasons are usually

not the whole story. After all, could you imagine a volunteer staying

with a project even if no one ever said a word in appreciation of his

work, or listened to him in discussions? Of course not. Clearly,

people spend time on free software for reasons beyond just an

abstract desire to produce good code. Understanding

volunteers' true motivations will help you arrange

things so as to attract and keep them. The desire to produce good

software may be among those motivations, along with the challenge and

educational value of working on hard problems. But humans also have a

built-in desire to work with other humans, and to give and earn

respect through cooperative activities. Groups engaged in cooperative

activities must evolve norms of behavior such that status is acquired

and kept through actions that help the group's

goals.





Those norms won't always arise by themselves. For example, on some projects (experienced open source developers can probably name several off the tops of their heads), people apparently feel that status is acquired by posting frequently and verbosely. They don't come to this conclusion accidentally; they come to it because they are rewarded with respect for making long, intricate arguments, whether or not those arguments actually help the project. Following are some techniques for creating an atmosphere in which status-acquiring actions are also constructive actions.







8.1.1. Delegation





Delegation

is not merely a way to spread the workload around; it is also a

political and social tool. Consider all the effects when you ask

someone to do something. The most obvious effect is that, if she

accepts, she does the task and you don't. But

another effect is that she is made aware that you trusted her to

handle the task. Furthermore, if you made the request in a public

forum, then she knows that others in the group have been made aware

of that trust too. She may also feel some pressure to accept, which

means you must ask in a way that allows her to decline gracefully if

she doesn't really want the job. If the task

requires coordination with others in the project, you are effectively

proposing that she become more involved, form bonds that might not

otherwise have been formed, and perhaps become a source of authority

in some subdomain of the project. The added involvement may be

daunting, or it may lead her to become engaged in other ways as well,

from an increased feeling of overall commitment.





Because of all these effects, it often makes sense to ask someone

else to do something even when you know you could do it faster or

better yourself. Of course, there is sometimes a strict economic

efficiency argument for this anyway: perhaps the opportunity cost of

doing it yourself would be too high: there might be something

even more important you could do with that time. But even when the

opportunity cost argument doesn't apply, you may

still want to ask someone else to take on the

task, because in the long run you want to draw that person deeper

into the project, even if it means spending extra time watching over

them at first. The converse technique also applies: if you

occasionally volunteer for work that someone else

doesn't want or have time to do, you will gain his

good will and respect. Delegation and substitution are not just about

getting individual tasks done; they're also about

drawing people into a closer commitment to the project.







8.1.1.1. Distinguish clearly between inquiry and assignment




Sometimes it is fair to expect that a person will accept a particular

task. For example, if someone writes a bug into the code, or commits

code that fails to comply with project guidelines in some obvious

way, then it is enough to point out the problem and thereafter behave

as though you assume the person will take care of it. But there are

other situations where it is by no means clear that you have a right

to expect action. The person may do as you ask, or may not. Since no

one likes to be taken for granted, you need to be sensitive to the

difference between these two types of situations, and tailor your

requests accordingly.





One thing that almost always causes people instant annoyance is being

asked to do something in a way that implies that you think it is

clearly their responsibility to do it, when they feel otherwise. For

example, assignment of incoming issues is particularly fertile ground

for this kind of annoyance. The participants in a project usually

know who is expert in what areas, so when a bug report comes in,

there will often be one or two people whom everyone knows could

probably fix it quickly. However, if you assign the issue over to one

of those people without her prior permission, she may feel she has

been put into an uncomfortable position. She senses the pressure of

expectation, but also may feel that she is, in effect, being punished

for her expertise. After all, the way one acquires expertise is by

fixing bugs, so perhaps someone else should take this one! (Note that

issue trackers that automatically assign issues to particular people

based on information in the bug report are less likely to offend,

because everyone knows that the assignment was made by an automated

process, and is not an indication of human expectations.)





While it would be nice to spread the load as evenly as possible,

there are certain times when you just want to encourage the person

who can fix a bug the fastest to do so. Given that you

can't afford a communications turnaround for every

such assignment ("Would you be willing to look at

this bug?" "Yes."

"Okay, I'm assigning the issue over

to you then."

"Okay."), you should simply make

the assignment in the form of an inquiry, conveying no pressure.

Virtually all issue trackers allow a comment to be associated with

the assignment of an issue. In that comment, you can say something

like this:







Assigning this over to you, jrandom, because you're

most familiar with this code. Feel free to bounce this back if you

don't have time to look at it, though. (And let me

know if you'd prefer not to receive such requests in

the future.)







This distinguishes clearly between the request

for assignment and the recipient's

acceptance of that assignment. The audience here

isn't only the assignee, it's

everyone: the entire group sees a public confirmation of the

assignee's expertise, but the message also makes it

clear that the assignee is free to accept or decline the

responsibility.











8.1.1.2. Follow up after you delegate




When you ask someone to do something, remember that you have done so,

and follow up with him no matter what. Most requests are made in

public forums, and are roughly of the form "Can you

take care of X? Let us know either way; no problem if you

can't, just need to know." You may

or may not get a response. If you do, and the response is negative,

the loop is closed; you'll need to try some

other strategy for dealing with X. If there is a positive response,

then keep an eye out for progress on the issue, and comment on the

progress you do or don't see (everyone works better

when they know someone else is appreciating their work). If there is

no response after a few days, ask again, or post saying that you got

no response and are looking for someone else to do it. Or just do it

yourself, but still make sure to say that you got no response to the

initial inquiry.





The purpose of publicly noting the lack of response is

not to humiliate the person, and your remarks

should be phrased so as not to have that effect. The purpose is

simply to show that you keep track of what you have asked for, and

that you notice the reactions you get. This makes people more likely

to say yes next time, because they will observe (even if only

unconsciously) that you are likely to notice any work they do, given

that you noticed the much less visible event of someone failing to

respond.











8.1.1.3. Notice what people are interested in




Another thing that makes people happy is to have their interests

noticed. In general, the more aspects of

someone's personality you notice and remember, the

more comfortable he will be, and the more he will want to work with

groups of which you are a part.





For example, there was a sharp distinction in the Subversion project

between people who wanted to reach a definitive 1.0 release (which we

eventually did), and people who mainly wanted to add new features and

work on interesting problems but who didn't much

care when 1.0 came out. Neither of these positions is better or worse

than the other; they're just two different kinds of

developers, and both kinds do lots of work on the project. But we

swiftly learned that it was important to not

assume that the excitement of the 1.0 drive was shared by everyone.

Electronic media can be very deceptive: you may sense an atmosphere

of shared purpose when, in fact, it's shared only by

the people you happen to have been talking to, while others have

completely different priorities.





The more aware you are of what people want out of the project, the

more effectively you can make requests of them. Even just

demonstrating an understanding of what they want, without making any

associated request, is useful, in that it confirms to each person

that she's not just another particle in an

undifferentiated mass.











8.1.2. Praise and Criticism





Praise and criticism are not

opposites; in many ways, they are very similar. Both are primarily

forms of attention, and are most effective when specific rather than

generic. Both should be deployed with concrete goals in mind. Both

can be diluted by inflation: praise too much or too often and you

will devalue your praise; the same is true for criticism, though in

practice, criticism is usually reactive and therefore a bit more

resistant to devaluation.





An important feature of technical culture is that detailed, dispassionate criticism is often taken as a kind of praise (as discussed in Section 6.1.4 in Chapter 6), because of the implication that the recipient's work is worth the time required to analyze it. However, both of those conditions, detailed and dispassionate, must be met for this to be true. For example, if someone makes a sloppy change to the code, it is useless (and actually harmful) to follow up saying simply "That was sloppy." Sloppiness is ultimately a characteristic of a person, not of their work, and it's important to keep your reactions focused on the work. It's much more effective to describe all the things wrong with the change, tactfully and without malice. If this is the third or fourth careless change in a row by the same person, it's appropriate to say so, again without anger, at the end of your critique, to make it clear that the pattern has been noticed.





If someone does not improve in response to criticism, the solution is

not more or stronger criticism. The solution is for the group to

remove that person from the position of incompetence, in a way that minimizes hurt feelings; see Section 8.3 later in this chapter for

examples. That is a rare occurrence, however. Most people respond

pretty well to criticism that is specific, detailed, and contains a

clear (even if unspoken) expectation of improvement.





Praise won't hurt anyone's

feelings, of course, but that doesn't mean it should

be used any less carefully than criticism. Praise is a tool: before

you use it, ask yourself why you want to use it.

As a rule, it's not a good idea to praise people for

doing what they usually do, or for actions that are a normal and

expected part of participating in the group. If you were to do that,

it would be hard to know when to stop: should you praise

everyone for doing the usual things? After all,

if you leave some people out, they'll wonder why.

It's much better to express praise and gratitude

sparingly, in response to unusual or unexpected efforts, with the

intention of encouraging more of such efforts. When a participant

seems to have moved permanently into a state of higher productivity,

adjust your praise threshold for that person accordingly. Repeated

praise for normal behavior gradually becomes meaningless anyway.

Instead, that person should sense that her high level of productivity

is now considered normal and natural, and only work that goes beyond

that should be specially noticed.





This is not to say that the person's contributions

shouldn't be acknowledged, of course. But remember

that if the project is set up right, everything that person does is

already visible anyway, and so the group will know (and the person

will know that the rest of the group knows) everything she does.

There are also ways to acknowledge someone's work by

means other than direct praise. You could mention in passing, while

discussing a related topic, that she has done a lot of work in the

given area and is the resident expert there; you could publicly

consult her on some question about the code; or perhaps most

effectively, you could conspicuously make further use of the work she

has done, so she sees that others are now comfortable relying on the

results of her work. It's probably not necessary to

do these things in any calculated way. Someone who regularly makes

large contributions in a project will know it, and will occupy a

position of influence by default. There's usually no

need to take explicit steps to ensure this, unless you sense that,

for whatever reason, a contributor is underappreciated.









8.1.3. Prevent Territoriality





Watch out for participants who try to stake

out exclusive ownership of certain areas of the project, and who seem

to want to do all the work in those areas, to the extent of

aggressively taking over work that others start. Such behavior may

even seem healthy at first. After all, on the surface it looks like

the person is taking on more responsibility, and showing increased

activity within a given area. But in the long run, it is destructive.

When people sense a "no

trespassing" sign, they stay away. This results in

reduced review in that area, and greater fragility, because the lone

developer becomes a single point of failure. Worse, it fractures the

cooperative, egalitarian spirit of the project. The theory should

always be that any developer is welcome to help out on any task at

any time. Of course, in practice, things work a bit differently:

people do have areas where they are more and less influential, and

non-experts frequently defer to experts in certain domains of the

project. But the key is that this is all voluntary: informal

authority is granted based on competence and proven judgement, but it

should never be actively taken. Even if the

person desiring the authority really is competent, it is still

crucial that he hold that authority informally, through the consensus

of the group, and that the authority never cause him to exclude

others from working in that area.





Rejecting or editing someone's work for technical

reasons is an entirely different matter, of course. There, the

decisive factor is the content of the work, not who happened to act

as gatekeeper. It may be that the same person happens to do most of

the reviewing for a given area, but as long as he never tries to

prevent someone else from doing that work too, things are probably

okay.





In order to combat incipient

territorialism, or even the appearance of it, many projects have

taken the step of banning the inclusion of author names or designated

maintainer names in source files. I wholeheartedly agree with this

practice: we follow it in the Subversion project, and it is more or

less official policy at the Apache Software Foundation. ASF

member Sander Striker puts it this way:







At the Apache Software Foundation we discourage the use of author

tags in source code. There are various reasons for this, apart from

the legal ramifications. Collaborative development is about working

on projects as a group and caring for the project as a group. Giving

credit is good, and should be done, but in a way that does not allow

for false attribution, even by implication. There is no clear line

for when to add or remove an author tag. Do you add your name when

you change a comment? When you put in a one-line fix? Do you remove

other author tags when you refactor the code and it looks 95%

different? What do you do about people who go about touching every

file, changing just enough to make the virtual author tag quota, so

that their name will be everywhere?





There are better ways to give credit, and our preference is to use

those. From a technical standpoint author tags are unnecessary; if

you wish to find out who wrote a particular piece of code, the

version control system can be consulted to figure that out. Author

tags also tend to get out of date. Do you really wish to be contacted

in private about a piece of code you wrote five years ago and were

glad to have forgotten?







A software project's source code files are the core

of its identity. They should reflect the fact that the developer

community as a whole is responsible for them, and not be divided up

into little fiefdoms.





People sometimes argue in favor of author or maintainer tags in

source files on the grounds that this gives visible credit to those

who have done the most work there. There are two problems with this

argument. First, the tags inevitably raise the awkward question of

how much work one must do to get one's own name

listed there too. Second, they conflate the issue of credit with that

of authority: having done work in the past does not imply ownership

of the area where the work was done, but it's

difficult if not impossible to avoid such an implication when

individual names are listed at the tops of source files. In any case,

credit information can already be obtained from the version control

logs and other out-of-band mechanisms like mailing list archives, so

no information is lost by banning it from the source files

themselves.





If your project decides to ban individual names from source files,

make sure not to go overboard. For instance, many projects have a

contrib/ area where small tools and helper scripts

are kept, often written by people who are otherwise not associated

with the project. It's fine for those files to

contain author names, because they are not really maintained by the

project as a whole. On the other hand, if a contributed tool starts

getting hacked on by other people in the project, eventually you may

want to move it to a less isolated location and, assuming the

original author approves, remove the author's name,

so that the code looks like any other community-maintained resource.

If the author is sensitive about this, compromise solutions are

acceptable, for example:





# indexclean.py: Remove old data from a Scanley index.

#

# Original Author: K. Maru <kobayashi@yetanotheremailservice.com>

# Now Maintained By: The Scanley Project <http://www.scanley.org/>

# and K. Maru.

#

# ...







But it's better to avoid such compromises, if

possible, and most authors are willing to be persuaded, because

they're happy that their contribution is being made

a more integral part of the project.





The important thing is to remember that there is a continuum between

the core and the periphery of any project. The main source code files

for the software are clearly part of the core, and should be

considered as maintained by the community. On the other hand,

companion tools or pieces of documentation may be the work of single

individuals, who maintain them essentially alone, even though the

works may be associated with, and even distributed with, the project.

There is no need to apply a one-size-fits-all rule to every file, as

long as the principle that community-maintained resources are not

allowed to become individual territories is upheld.









8.1.4. The Automation Ratio





Try not to let humans do what machines

could do instead. As a rule of thumb, automating a common task is

worth at least 10 times the effort a developer would spend doing that

task manually one time. For very frequent or very complex tasks, that

ratio could easily go up to 20 or even higher.





Thinking of yourself as a "project

manager," rather than just another developer, may be

a useful attitude here. Sometimes individual developers are too

wrapped up in low-level work to see the big picture and realize that

everyone is wasting a lot of effort performing automatable tasks

manually. Even those who do realize it may not take the time to solve

the problem: because each individual performance of the task does not

feel like a huge burden, no one ever gets annoyed enough to do

anything about it. What makes automation compelling is that the small

burden is multiplied by the number of times each developer incurs it,

and then that number is multiplied by the number

of developers.





Here, I am using the term

"automation" broadly, to mean not

only repeated actions where one or two variables change each time,

but any sort of technical infrastructure that assists humans. The

minimum standard automation required to run a project these days was

described in Chapter 3, but each project may

have its own special problems too. For example, a group working on

documentation might want to have a

web site displaying the most up-to-date versions of the documents at

all times. Since documentation is often written in a markup language

like XML, there may be a compilation step, often quite intricate,

involved in creating displayable or downloadable documents. Arranging

a web site where such compilation happens automatically on every

commit can be complicated and time-consuming, but it is worth

it, even if it costs you a day or more to set up. The overall

benefits of having up-to-date pages available at all times are huge,

even though the cost of not having them might

seem like only a small annoyance at any single moment, to any single

developer.
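One possible shape for such automation is a post-commit hook along these lines. This is only a minimal sketch, assuming a Subversion repository and a documentation build driven by make; the working-copy path, the build command, and the layout are placeholders standing in for whatever your project actually uses.

#!/usr/bin/env python
# post-commit: rebuild and publish the project documentation after each commit.
# Hypothetical sketch: the working-copy path and build command are placeholders,
# not part of any real project's setup.

import subprocess
import sys

REV = sys.argv[2]                        # Subversion passes (repos-path, revision) to the hook
WORKING_COPY = "/var/www/doc-build"      # checkout that the web server serves pages from
BUILD_CMD = ["make", "html"]             # whatever turns the XML sources into displayable pages

def main():
    # Bring the working copy up to the revision that was just committed.
    subprocess.check_call(["svn", "update", "-r", REV, WORKING_COPY])
    # Rebuild the displayable documents in place, so the site is always current.
    subprocess.check_call(BUILD_CMD, cwd=WORKING_COPY)

if __name__ == "__main__":
    main()

Once something like this is in place, nobody ever has to remember to regenerate the pages again, and the one-time setup cost is quickly repaid.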





Taking such steps eliminates not merely wasted time, but the griping

and frustration that ensue when humans make missteps (as they

inevitably will) in trying to perform complicated procedures

manually. Multi-step, deterministic operations are exactly what

computers were invented for; save your humans for more interesting

things.







8.1.4.1. Automated testing




Automated test runs are helpful for

any software project, but especially so for open source projects,

because automated testing (especially regression testing) allows

developers to feel comfortable changing code in areas they are

unfamiliar with, and thus encourages exploratory development. Since

detecting breakage is so hard to do by hand (one essentially has
to guess where one might have broken something, and try various
experiments to prove that one didn't), having

automated ways to detect such breakage saves the project a

lot of time. It also makes people much more

relaxed about refactoring large swaths of code, and therefore

contributes to the software's long-term

maintainability.







Regression Testing





Regression testing means testing for the reappearance of

bugs that were already fixed. The purpose of regression testing is to

reduce the chances that code changes will break the software in

unexpected ways. As a software project gets bigger and more

complicated, the chances of such unexpected side effects increase

steadily. Good design can reduce the rate at which the chances

increase, but it cannot eliminate the problem entirely.





As a result, many projects have a test suite,

a separate program that invokes the project's

software in ways that have been known in the past to stimulate

specific bugs. If the test suite succeeds in making one of these bugs

happen, this is known as a regression, meaning

that someone's change unexpectedly unfixed a

previously fixed bug.





See also http://en.wikipedia.org/wiki/Regression_testing.
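To make the idea concrete, here is a minimal sketch of what one entry in such a test suite might look like, written with Python's standard unittest module. The scanley module, its index_document() function, and the issue number are hypothetical stand-ins for your project's real interfaces and bug history, not actual project code.

# test_issue_1729.py: guard against the reappearance of a previously fixed bug.
# Hypothetical example: "scanley" and index_document() stand in for whatever
# interface the project actually exposes.

import unittest

import scanley  # the project's own library (assumed to be importable)

class EmptyDocumentRegressionTest(unittest.TestCase):
    def test_empty_document_does_not_crash(self):
        # Issue #1729: indexing an empty document used to raise an exception.
        # If this assertion ever fails again, a change has reintroduced the bug.
        self.assertEqual(scanley.index_document(""), [])

if __name__ == "__main__":
    unittest.main()

If adding a file like this is all it takes, developers will write tests as a matter of course.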








Regression testing is not a panacea. For one thing, it works best for

programs with batch-style interfaces. Software that is operated

primarily through graphical user interfaces is much harder to drive

programmatically. Another problem is that the regression test suite

framework itself can often be quite complex, with a learning curve

and maintenance burden all its own. Reducing this complexity is one

of the most useful things you can do, even though it may take a

considerable amount of time. The easier it is to add new tests to the

suite, the more developers will do so, and the fewer bugs will

survive to release. Any effort spent making tests easier to write

will be paid back manyfold over the lifetime of the project.





Many projects have a

"Don't break the

build!"
rule, meaning:

don't commit a change that makes the software unable

to compile or run. Being the person who broke the build is usually

cause for mild embarrassment and ribbing. Projects with regression

test suites often have a corollary rule: don't

commit any change that causes tests to fail. Such failures are

easiest to spot if there are automatic nightly runs of the entire

test suite, with the results mailed out to the development list or to

a dedicated test-results mailing list; that's

another example of a worthwhile automation.
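Such a nightly run can be as simple as a scheduled script that invokes the test suite and mails whatever it prints. The sketch below assumes Python; the test command, the addresses, and the local mail server are hypothetical placeholders, not a prescription.

# nightly_tests.py: run the full test suite and mail the results to the list.
# Hypothetical sketch: the test command, addresses, and SMTP host are
# placeholders; adapt them to the project's real infrastructure.

import smtplib
import subprocess
from email.message import EmailMessage

TEST_CMD = ["make", "check"]                  # whatever runs the whole suite
FROM_ADDR = "buildbot@scanley.org"
TO_ADDR = "test-results@scanley.org"          # dedicated results list

def main():
    proc = subprocess.run(TEST_CMD, capture_output=True, text=True)
    status = "PASSED" if proc.returncode == 0 else "FAILED"

    msg = EmailMessage()
    msg["Subject"] = "Nightly test run: " + status
    msg["From"] = FROM_ADDR
    msg["To"] = TO_ADDR
    msg.set_content(proc.stdout + "\n" + proc.stderr)

    with smtplib.SMTP("localhost") as smtp:   # assumes a local mail server
        smtp.send_message(msg)

if __name__ == "__main__":
    main()

Run from cron or any similar scheduler, a script like this keeps failures visible without anyone having to remember to run the suite by hand.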





Most volunteer developers are willing to take the extra time to write

regression tests, when the test system is comprehensible and easy to

work with. Accompanying changes with tests is understood to be the

responsible thing to do, and it's also an easy

opportunity for collaboration: often two developers will divide up

the work for a bug fix, with one writing the fix itself, and the

other writing the test. The latter developer may often end up with

more work, and since writing a test is already less satisfying than

actually fixing the bug, it is imperative that the test suite not

make the experience more painful than it has to be.





Some projects go even further, requiring that a new test accompany

every bug fix or new feature. Whether this is a

good idea or not depends on many factors: the nature of the software,

the makeup of the development team, and the difficulty of writing new

tests. The

CVS (http://www.cvshome.org/) project has long had

such a rule. It is a good policy in theory, since CVS is version

control software and therefore very risk-averse about the possibility

of munging or mishandling the user's data. The

problem in practice is that CVS's regression test

suite is a single huge shell script (amusingly named

sanity.sh), hard to read and hard to modify or

extend. The difficulty of adding new tests, combined with the

requirement that patches be accompanied by new tests, means that CVS

effectively discourages patches. When I used to work on CVS, I

sometimes saw people start on and even complete a patch to

CVS's own code, but give up when told of the

requirement to add a new test to sanity.sh.





It is normal to spend more time writing a new regression test than on

fixing the original bug. But CVS carried this phenomenon to an

extreme: one might spend hours trying to design

one's test properly, and still get it wrong, because

there are just too many unpredictable complexities involved in

changing a 35,000-line Bourne shell script. Even longtime CVS

developers often grumbled when they had to add a new test.





This situation was due to a failure on all our parts to consider the

automation ratio. It is true that switching to a real test

framework (whether custom-built or off-the-shelf) would

have been a major effort.[2] But neglecting to do

so has cost the project much more, over the years. How many bug fixes

and new features are not in CVS today, because

of the impediment of an awkward test suite? We cannot know the exact

number, but it is surely many times greater than the number of bug

fixes or new features the developers might forgo in order to develop

a new test system (or integrate an off-the-shelf system). That task

would only take a finite amount of time, while the penalty of using

the current test suite will continue forever if nothing is done.

[2] Note that there would be no

need to convert all the existing tests to the new framework; the two

could happily exist side by side, with old tests converted over only

as they needed to be changed.





The point is not that having strict requirements to write tests is

bad, nor that writing your test system as a Bourne shell script is

necessarily bad. It might work fine, depending on how you design it

and what it needs to test. The point is simply that when the test

system becomes a significant impediment to development, something

must be done. The same is true for any routine process that turns

into a barrier or a bottleneck.











8.1.5. Treat Every User as a Potential Volunteer





Each interaction with a user is an

opportunity to get a new volunteer. When a user takes the time to

post to one of the project's mailing lists, or to

file a bug report, he has already tagged himself as having more

potential for involvement than most users (from whom the project will

never hear at all). Follow up on that potential: if he described a

bug, thank him for the report and ask him if he wants to try fixing

it. If he wrote to say that an important question was missing from

the FAQ, or that the program's documentation was

deficient in some way, then freely acknowledge the problem (assuming

it really exists) and ask if he's interested in

writing the missing material himself. Naturally, much of the time the

user will demur. But it doesn't cost much to ask,

and every time you do, it reminds the other listeners in that forum

that getting involved in the project is something anyone can do.





Don't limit your goals to acquiring new developers

and documentation writers. For example, even training people to write

good bug reports pays off in the long run, if you

don't spend too much time per

person, and if they go on to submit more bug reports in the

future, which they are more likely to do if they got a

constructive reaction to their first report. A constructive reaction

need not be a fix for the bug, although that's

always the ideal; it can also be a solicitation for more information,

or even just a confirmation that the behavior is

a bug. People want to be listened to. Secondarily, they want their

bugs fixed. You may not always be able to give them the latter in a

timely fashion, but you (or rather, the project as a whole) can give

them the former.





A corollary of this is that developers should not express anger at

people who file well-intended but vague bug reports. This is one of

my personal pet peeves; I see developers do it all the time on

various open source mailing lists, and the harm it does is palpable.

Some hapless newbie will post a useless report:







Hi, I can't get Scanley to run. Every time I start

it up, it just errors. Is anyone else seeing this problem?







Some developer (who has seen this kind of report a thousand times, and hasn't stopped to think that the newbie has not) will respond like this:







What are we supposed to do with so little information? Sheesh. Give

us at least some details, like the version of Scanley, your operating

system, and the error.







This developer has failed to see things from the

user's point of view, and also failed to consider

the effect such a reaction might have on all the

other people watching the exchange. Naturally a

user who has no programming experience, and no prior experience

reporting bugs, will not know how to write a bug report. What is the

right way to handle such a person? Educate them! And do it in such a

way that they come back for more:







Sorry you're having trouble. We'll

need more information in order to figure out what's

happening here. Please tell us the version of Scanley, your operating

system, and the exact text of the error. The very best thing you can

do is send a transcript showing the exact commands you ran, and the

output they produced. See http://www.scanley.org/how_to_report_a_bug.html

for more.







This way of responding is far more effective at extracting the needed

information from the user, because it is written to the

user's point of view. First, it expresses sympathy:

You had a problem; we feel your pain. (This is

not necessary in every bug report response; it depends on the

severity of the problem and how upset the user seemed.) Second,

instead of belittling her for not knowing how to report a bug, it

tells her how, and in enough detail to be actually useful; for

example, many users don't realize that

"show us the error" means

"show us the exact text of the error, with no

omissions or abridgements." The first time you work

with such a user, you need to be specific about that. Finally, it

offers a pointer to much more detailed and complete instructions for

reporting bugs. If you have successfully engaged with the user, she

will often take the time to read that document and do what it says.

This means, of course, that you have to have the document prepared in

advance. It should give clear instructions about what kind of

information your development team wants to see in every bug report.

Ideally, it should also evolve over time in response to the

particular sorts of omissions and misreports users tend to make for

your project.





The Subversion project's bug reporting instructions

are a fairly standard example of the form (see Appendix D). Notice how they close with an invitation

to provide a patch to fix the bug. This is not because such an

invitation will lead to a greater patch/report ratio; most users

who are capable of fixing bugs already know that a patch would be

welcome, and don't need to be told. The

invitation's real purpose is to emphasize to all

readers, especially those new to the project or new to free software

in general, that the project runs on volunteer contributions. In a

sense, the project's current developers are no more

responsible for fixing the bug than is the person who reported it.

This is an important point that many new users will not be familiar

with. Once they realize it, they're more likely to

help make the fix happen, if not by contributing code then by

providing a more thorough reproduction recipe, or by offering to test

fixes that other people post. The goal is to make every user realize

that there is no innate difference between herself
and the people who work on the project; that

it's a question of how much time and effort one puts

in, not a question of who one is.





The admonition against responding angrily does not apply to rude

users. Occasionally people post bug reports or complaints that,

regardless of their informational content, show a sneering contempt

at the project for some failing. Often such people are alternately

insulting and flattering, such as the person who posted this to a

Subversion mailing list:







Why is it that after almost 6 days there still

aren't any binaries posted for the windows

platform?!? It's the same story every time and

it's pretty frustrating. Why aren't

these things automated so that they could be available immediately??

When you post an "RC" build, I

think the idea is that you want users to test the build, but yet you

don't provide any way of doing so. Why even have a

soak period if you provide no means of testing??







Initial response to this rather inflammatory post was surprisingly

restrained: people pointed out that the project had a published

policy of not providing official binaries, and said, with varying

degrees of annoyance, that he ought to volunteer to produce them

himself if they were so important to him. Believe it or not, his next

post started with these lines:







First of all, let me say that I think Subversion is awesome and I

really appreciate the efforts of everyone involved. [...]







...and then he went on to berate the project

again for not providing binaries, while still

not volunteering to do anything about it. After that, about 50 people

just jumped all over him, and I can't say I really

minded. The "zero-tolerance" policy

toward rudeness advocated in Section 2.4.2 in Chapter 2 applies to

people with whom the project has (or would like to have) a sustained

interaction. But when someone makes it clear from the start that he

is going to be a fountain of bile, there is no point making him feel

welcome.





Such situations are fortunately quite rare, and they are noticeably

rarer in projects that make an effort to engage users constructively

and courteously from their very first interaction.


















