Forsythe1993
The Construction of Work in Artificial Intelligence
Diana E. Forsythe
Abstract
Although technology is often viewed as value-free, an anthropological perspective suggests that technological tools embody values and assumptions of their builders. Drawing upon extended field research, this article investigates the construction of work in the expert systems community of artificial intelligence (AI). Describing systematic deletions in practitioners' representations of their own work, the article relates these to both the selectivity of conventional knowledge acquisition procedures and the tendency of expert systems to (in the practitioners' words) "fall off the knowledge cliff." Although system builders see the latter problem as purely technical, this article suggests that it is also the result of nontechnical factors, including the system builders' own tacit assumptions. This article supports the view that technology has a cultural dimension.
Introduction
The anthropology of science and technology
In contradistinction to the commonsense view of technology as value-free, and thus to be judged largely according to whether it "works" or not, a major theme of the developing anthropology of science and technology (Hess 1992) is that technological tools contain embedded values. From an anthropological perspective, such tools embody values and assumptions held (often tacitly) by those who construct them. Thus, for example, tools of experimental high-energy physics embody beliefs about gender (Traweek 1988), knowledge-based systems encode assumptions about the relation between plans and human action (Suchman 1987) and about the order of particular work processes (Sachs forthcoming; Suchman 1992), and a hypermedia system for high school students incorporates a cultural theory about choice, education, and the individualistic nature of human knowledge (Nyce and Bader forthcoming).
Culture as what people take for granted
This implies that technology has a cultural dimension. As understood in interpretative anthropology, the approach taken in this article, culture defines what people take for granted: the basic categories they use to make sense of the world and to decide how to act in it (Geertz 1973, 1983b). In addition to the broader cultural backgrounds in relation to which any scientific practice takes place, academic disciplines define ways of viewing and acting in the world that can also be described as cultural. Thus, as scientists address the choices and problems that inevitably arise in the course of their practice (Pfaffenberger 1992, 498-99), the solutions that they construct reflect the cultural realities of their own social and disciplinary milieux.
Case: expert systems community within AI
- How work is understood (represented)
- How this understanding affects their work
- An extension of Suchman's essay, which merely juxtaposes the two
- Selective application of the term "work"
This article takes a case from the expert systems community within artificial intelligence (AI), to examine how particular assumptions held by practitioners come to be embedded in the tools that they construct. Expert systems are complex computer programs designed to do work that requires intelligent decision making. As such, they embody explicit ideas about such matters as useful problem-solving strategies. Expert systems also embody some ideas that are less explicit, central among which are beliefs about the meaning of knowledge and work. Elsewhere, I address the construction of knowledge in AI, investigating what system builders mean by knowledge and how their epistemological stance is incorporated in the expert systems they produce (Forsythe forthcoming). Here, I take up the related topic of the construction of work in AI. As with the construction of knowledge, the construction of work can be understood in two ways, both of which I propose to address. These are, first, the distinctive way in which the notion of work is understood by the practitioners to be described and, second, the way in which this particular understanding of work affects their system-building procedures and thus the resultant systems. I will try to show (1) that practitioners apply the term work in a very selective manner to their own professional activities and (2) that this selective approach carries over to the way in which they investigate the work of the human experts whose practice knowledge-based systems are intended to emulate. Then I will suggest (3) that the resultant partial representation of expert knowledge encoded in the knowledge base of such systems affects the way the systems themselves finally work, contributing to their fallibility when encountering real-world situations.
The Culture of AI
The material presented here is part of an ongoing investigation of what one might call the culture of artificial intelligence (Forsythe 1992, forthcoming).1 This research focuses on the relationship between the values and assumptions that a community of AI practitioners bring to their work, the practices that constitute that work, and the tools constructed in the course of that work. The AI specialists I describe view their professional work as science (and, in some cases, engineering), which they understand in positivist terms as being beyond or outside culture. However, detailed observation of their daily work suggests that the truths and procedures of AI are in many ways culturally contingent. The scientists' work and the approach they take to it make sense in relation to a particular view of the world that is taken for granted in the laboratory. Documenting that worldview is a central goal of this research. However, what the scientists try to do does not always work as they believe it should, leading to confusions and ironies that I attempt to document as well.
Source of the data
Data for this research have been gathered in the course of an extended anthropological field study. Since 1986, I have been a full-time participant observer in five different expert systems laboratories, four in academe and one in industry. The vast majority of that time has been spent in two of these laboratories. To help protect the identity of my informants, I will present the material below under the collective label the Lab.
In addition to laboratory-based participant observation, this research has involved taking part in meetings with representatives of various funding agencies, in national conferences, and in research and writing teams that have produced proposals and publications in AI and related fields. By moving in and out of a range of laboratory and other professional settings, I have attempted to study a scientific community rather than to focus exclusively on the laboratories as bounded social entities. This community-centered approach to the study of science and technology follows that developed by Traweek (1988), extending the laboratory-centered approach pioneered by Latour and Woolgar (1986), Knorr-Cetina (1981), and Lynch (1985).
Background
Expert Systems
Knowledge engineering
- Decision-making
Expert systems are constructed through a process known as "knowledge engineering" (Forsythe forthcoming) by practitioners in AI, some of whom identify themselves as knowledge engineers. Each expert system is intended to automate decision-making processes normally undertaken by a human expert by capturing and coding in machine-readable form the background knowledge and rules of thumb (heuristics) used by the expert to make decisions in a particular subject area (domain). This information is encoded in the system's knowledge base, which is then manipulated by the system's inference engine to reach conclusions relating to the tasks at hand.
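To make the division of labor between a knowledge base and an inference engine concrete, the sketch below shows a toy forward-chaining rule interpreter in Common Lisp, the language in which the Lab members described here build their systems. It is purely illustrative and not drawn from any system discussed in this article; the facts, rules, and the forward-chain function are all hypothetical.

```lisp
;; A minimal illustrative sketch, not any actual system described in this article.
;; The "knowledge base": facts supplied by an expert, plus if-then heuristics.
(defparameter *facts* '(fever positive-blood-culture))

(defparameter *rules*
  ;; Each rule pairs a list of antecedent facts with a single conclusion.
  '(((fever positive-blood-culture) . likely-bacterial-infection)
    ((likely-bacterial-infection recent-surgery) . suspect-surgical-site)))

;; The "inference engine": repeatedly fire any rule whose antecedents are all
;; known, adding its conclusion to the set of facts, until nothing changes.
(defun forward-chain (facts rules)
  (let ((changed t))
    (loop while changed do
      (setf changed nil)
      (dolist (rule rules)
        (when (and (every (lambda (f) (member f facts)) (car rule))
                   (not (member (cdr rule) facts)))
          (push (cdr rule) facts)
          (setf changed t))))
    facts))

;; (forward-chain *facts* *rules*)
;; => (LIKELY-BACTERIAL-INFECTION FEVER POSITIVE-BLOOD-CULTURE)
```

The point of the sketch is only the architectural split the paragraph above describes: everything the system "knows" sits in the data structures, and the inference engine is a generic procedure that manipulates them.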
The cycle
- Compare to Selby
- Knowledge acquisition = converting from Human to Machine form
- Compare to process of coding
Building an expert system typically involves carrying out the following steps: (1) collecting information from one or more human informants or from documentary sources; (2) ordering that information into procedures (e.g., rules and constraints) relevant to the operations that the prospective system is intended to perform; and (3) designing or adapting a computer program to apply these rules and constraints in performing the designated operations. The first two steps in this series (that is, the gathering of information and its translation into machine-readable form) constitute the process known as knowledge acquisition. The early stages of knowledge acquisition often include extended face-to-face interviewing of one or more experts, a process sometimes referred to as knowledge elicitation.
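As a schematic illustration of step 2, and continuing the toy Lisp representation sketched above, a rule of thumb elicited in an interview might be written down and then recast as a machine-readable rule. The transcript fragment and rule below are hypothetical, not taken from any project described in this article.

```lisp
;; Hypothetical fragment of an elicitation interview (invented for illustration):
;;   "If the engine cranks but won't start, I check the fuel supply first."
;; Step 2 recasts the elicited heuristic as a rule for the knowledge base:
(defparameter *elicited-rule*
  '((engine-cranks engine-wont-start) . check-fuel-supply))
```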
Falling off the Knowledge Cliff
- Knowledge-bases
- Tacit knowledge = culture != plans
- Compare to the situated knowledge of Suchman
Expert systems contain stored information (referred to as knowledge) plus encoded procedures for manipulating that knowledge. One problem with such systems to date is that they are both narrow and brittle. That is, although they may function satisfactorily within narrow limits (a specific application, problem area, or domain), they tend to fail rapidly and completely when moved very far from those limits or when faced with situations that the system builders did not anticipate. This is referred to as the tendency of such systems to "fall off the knowledge cliff."3 Various knowledge-related problems can befall systems in the real world. I will provide two examples that relate to the theme of this article; in different ways, each reflects the problem of tacit knowledge. A well-known example of a knowledge-related failure is provided by MYCIN, one of the first expert systems for medical diagnosis. Given a male patient with a bacterial infection, MYCIN suggested that one likely source of infection might be a prior amniocentesis. Although this is absurd, no one had thought to build into the system's knowledge base the information that men do not get pregnant. Therefore, it did not prune amniocentesis from the menu of possible sources of infection in men (Buchanan and Shortliffe 1984, 692). Information about where babies come from is representative of a large class of knowledge that experts might not think to explain to interviewers, but which is clearly essential for correct inference in certain areas of human activity. Without such taken-for-granted background knowledge about the world, expert systems tend to fall off the knowledge cliff. In the case of MYCIN, information was omitted from a knowledge base, presumably because of the system builders' tacit assumption that everybody knows that men do not get pregnant. This cultural assumption works well in the context of communication between humans over the age of 6, but is inappropriate when applied to computers.
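The kind of omission at issue can be shown schematically with the same toy Lisp representation used above. This is an illustration of the general point only, not a reconstruction of MYCIN, which used a different rule formalism; all names and facts here are hypothetical. The sketch turns on a table of "everybody knows" background knowledge: when that table is left empty, nothing prunes the absurd candidate.

```lisp
;; Schematic illustration only; not MYCIN's actual rules or architecture.
(defparameter *candidate-sources* '(wound urinary-tract prior-amniocentesis))

;; Background knowledge of the taken-for-granted sort: which candidate sources
;; are impossible given a patient attribute. Leaving this table empty reproduces
;; the failure described above: nothing prunes amniocentesis for a male patient.
(defparameter *impossible-given*
  '((male . (prior-amniocentesis))))

(defun plausible-sources (patient-attributes)
  "Prune candidate infection sources that background knowledge rules out."
  (let ((excluded (loop for attr in patient-attributes
                        append (cdr (assoc attr *impossible-given*)))))
    (remove-if (lambda (s) (member s excluded)) *candidate-sources*)))

;; With the background entry present:
;;   (plausible-sources '(male))  => (WOUND URINARY-TRACT)
;; With *impossible-given* empty (the tacit fact never encoded):
;;   (plausible-sources '(male))  => (WOUND URINARY-TRACT PRIOR-AMNIOCENTESIS)
```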
A case of inappropriate information
In a case documented by Sachs (forthcoming), on the other hand, a knowledge-related problem arose because inappropriate information was built into a knowledge base. Sachs describes an expert system for inventory control that cannot accommodate the information that the actual supply of a given part differs from what the system "believes" it should be according to assumptions encoded by the system builders. She reports that seasoned users sometimes resort to "tricking" the system, knowingly entering false information to work around these built-in assumptions.4 We may assume that people on the shop floor are perfectly aware that work does not always take place according to official procedures. However, it appears that this piece of background knowledge may not have been built into the knowledge base in such a way as to accommodate the complexity of real-life situations. The result is a system that falls off the knowledge cliff, able to function as it should only if users sometimes distort their input to conform to the internal "reality" encoded in the system.
3 AI explanations for the failure
Within artificial intelligence, problems like this have been attributed to three causes. First, the limits of technology: At present it is not feasible to store enough "knowledge" to enable expert systems to be broadly applicable. Second, such systems do not themselves contain so-called deep models of the problem areas within which they work, either because there is no adequate theory in that area or because the technology for representing models is not adequate (Karp and Wilkins 1989). And third, expert systems do not have what AI people refer to as "common sense" (Lenat and Feigenbaum 1987; McCarthy 1984).
Attempts to solve these problems
Research in AI is attempting to solve these problems. For example, there is now a massive effort underway to try to embed common sense into expert systems. One such effort is the CYC project at Microelectronics and Computer Technology Corporation (MCC) in Texas (Guha and Lenat 1990; Lenat and Guha 1990; Lenat, Prakash, and Shepherd 1986), which is developing an enormous computerized knowledge base intended to encode the knowledge necessary to read and understand "any encyclopedia article" (Guha and Lenat 1990, 34). In the words of its builders, "To build CYC, we must encode all the world's knowledge down to some level of detail; there is no way to finesse this" (Lenat, Prakash, and Shepherd 1986). The intended outcome of this project will be a generalized commonsense knowledge base that will apply to a wide range of systems and thus keep those systems from falling off the knowledge cliff.
Forsythe's view: fault of their assumptions
As a participant observer in the world of expert systems, I have a somewhat different perspective from AI specialists on why these systems are narrow and brittle. Current technical and representational capabilities may well impose limitations, as my informants insist; I am not in a position to evaluate that claim. After observing the process of building the systems, however, I believe that there is another source of difficulty that also affects the way expert systems perform, but of which there is very little discussion in AI. In my view, some of the problems that the scientists attribute to the limitations of technology are actually reflections of implicit assumptions and conceptual orientations of the system builders themselves.
AI vs anthro on knowledge, information and evaluation
- Knowledge "out there" vs. local knowledge
In previous publications, I have pursued this theme with respect to the meaning and implications for design of three basic concepts in AI: the notions of knowledge (Forsythe forthcoming), information (Forsythe et al. 1992), and evaluation (Forsythe and Buchanan 1991).5 All of these concepts are interpreted more formally and narrowly in AI than they are in, for example, anthropology. The different assumptions about the nature of knowledge held by AI specialists and anthropologists are illustrated by their different reactions to the CYC project mentioned above. To many researchers in AI, this project makes obvious sense. Because they tend to see knowledge as "out there" and as universal in nature (Forsythe forthcoming), building a generalized commonsense knowledge base seems to them a challenging but meaningful and worthwhile goal. In contrast, anthropologists typically react to this idea as absurd; given the anthropological view of common sense (and other) knowledge as cultural and, therefore, local in nature (Geertz 1983a), the notion of universally applicable common sense is an oxymoron. Thus, if we regard academic disciplines as "intellectual villages," as Geertz (1983b, 157) suggests, the villagers of AI and cultural anthropology see the world in distinctly different ways.
What Is the Work of AI?
Continuing this exploration into the shared assumptions and practices of the world of AI, I turn now to the way system builders construct the notion of work. Understanding how they view work (their own as well as that of the experts whose work their systems are designed to emulate or replace) illuminates a good deal about why expert systems are narrow and brittle, and thus at risk of falling off the knowledge cliff. Beginning with a brief description of the work setting, I will present some material on the way in which the Lab members describe their own work, and then contrast that with what I actually see them doing in the Lab.
The Lab consists of a series of largely open-plan rooms with desks and carrels for graduate students and visiting researchers. Individual offices are provided for the Lab head, members of the research staff, and senior secretarial and administrative personnel, and there is a seminar room that doubles as a library. Most work rooms contain whiteboards, typically covered with lists and diagrams written in ink of various colors, and nearly every desk, carrel, and office is equipped with a terminal or networked personal computer linked to one of the Lab's four laser printers.
All Lab members do some of their work on these computers, work that is in a sense invisible to the observer. This presents something of a problem for the field-worker wishing to record details of their practice. Walking through the research areas, what one sees are individuals (mostly male) seated in swivel chairs in front of large computer screens, sometimes typing and sometimes staring silently at the screen. Occasionally, two or three individuals gather in front of a screen to confer over a problem. The products of this labor are also largely invisible, because the systems themselves are stored in the computers. They manifest themselves in the form of diagrams or segments of code that appear in windows on the computer screens and are sometimes transferred to hard copy (i.e., printed out on paper).
Wondering what it means to "do AI," I have asked many practitioners to describe their own work. Their answers invariably focus on one or more of the following: problem solving, writing code, and building systems. These three tasks are seen as aspects of a single process: One writes code (i.e., programs a computer) in the course of building a system, which, in turn, is seen as a means of discovering the solution to a problem. The central work of AI, then, is problem solving; writing code and building systems are strategies for accomplishing that work. One scientist said:
Every first version [of a system] ought to be a throwaway. You can work on conceptualizing the problem through the programming. . . . Generally speaking we don't know how to solve the problem when it's handed to us. It's handed over to us when there's no mathematical solution. . . . You start solving the problem by beginning to program, and then the computer becomes your ally in the process instead of your enemy.
This view of AI is presented consistently, not only in comments to me but also in spontaneous conversations in the Lab, in textbooks, and in lectures to graduate students.
However, we get a somewhat different picture of the work of AI if we take an observational approach to this question, looking at what the scientists in the Lab actually spend their days doing. Lab members perform a wide range of tasks, some carried out individually and some collectively. They do write code, build systems, and solve problems. But they also do a large number of other things in the course of their workday. These tasks, many of which are not mentioned when Lab members characterize their work, include the following:
First there are meetings. Lab members spend an enormous amount of time every week in face-to-face meetings, of which there are many sorts.
1. Lab meetings: There are regular meetings for the purpose of Lab management and for keeping up with who is doing what. These are of two sorts. First, there is a general meeting attended by the Lab head, staff researchers, and graduate students. At this meeting, everyone reports on what has been accomplished during the previous week and what is planned for the coming week. Second, there is a periodic meeting of what is known as the Lab exec, which is attended by the Lab head, the staff researchers, and senior secretarial and administrative staff. At this meeting, discussions take place concerning issues of policy.
2. Project meetings: Roughly half a dozen major research projects are based in the Lab, all of which involve collaboration between Lab members and researchers from other departments within (and in some cases outside) the university. These projects meet on a regular basis in the Lab seminar room, some weekly and some less often.
3. Research seminars: Several research seminars are run from the Lab on themes central to the Lab's research agenda.
4. Formal meetings of committees and groups outside the Lab: departmental faculty meetings, interdisciplinary program meetings, meetings with division units, with the dean, and so forth.
5. Informal meetings with colleagues from the department, from other departments, from elsewhere in the university, and from other universities for such purposes as project development, keeping abreast of events, formulating political strategy, attempting to mediate disputes, and so forth.
6. Meetings with current, former, and potential students for the purpose of guidance and recruiting.
7. Meetings in connection with work for outside institutions: These include review boards of professional societies, editorial boards, academic institutions, funding agencies, management and program committees of professional societies, and so forth.
8. Reviewing for journals, conferences, and funding agencies.
9. Conferences and academic meetings.
In addition to all these face-to-face meetings, Lab members carry out a wide variety of other tasks, including the following:
10. Communication with colleagues, students, and others that is not face-to-face: This includes communication by letter and telephone, but consists increasingly of communication by means of electronic mail (E-mail). All Lab members have E-mail accounts through which they can communicate equally easily with people in the next office and with people in other countries. This communication is free to Lab members, and they spend a good deal of time at it. The head of the Lab estimates that he spends 20% of his time on E-mail.
11. Seeking funding: The laboratory runs on soft money. Only the head of the laboratory receives a full-time academic salary, and even he is responsible for raising his own summer salary in grant money. Money for part of the secretarial and administrative salaries and for the research scientists' and graduate students' salaries must be raised through grants. Thus a good deal of laboratory work time is spent in investigating sources of money, in project development, in proposal writing and editing, in budgeting proposed projects, and in politicking in support of these proposals. The head of the Lab estimates that he spends at least 20% of his time in pursuit of funding.
12. Teaching courses: This includes lecture time and course preparation time (for teaching staff).
13. Taking courses: This includes lecture time and studying time (for graduate students).
14. Writing papers, editing, and preparing outside lectures.
15. Lab and departmental administration, including budgeting.
16. Clerical and administrative work: The Lab secretaries and administrators do this full-time, with occasional help from work-study students and from other secretaries in the department. In addition, Lab members do a good deal of photocopying for themselves.
17. Hardware and software maintenance and upgrading: This includes virus checking, installing new versions of operating systems and applications software on the dozen or so computers in the Lab, maintaining software compatibility, and so forth.
18. Personal file and directory maintenance on the computer: the computer analog of housekeeping.
19. And, finally, Lab members write code, build systems, and solve problems.
Few of these tasks will be surprising to academic readers; after all, this is a university research laboratory. There are, however, some interesting things about this list when viewed in relation to what practitioners say about what their work consists of. To begin with, there is a striking disparity between the self-description and the observational data on what these scientists do at work. Two points stand out here. First, the tasks that Lab members can be seen to do on a regular basis are far more various than their own description suggests: that self-description is highly selective. Even if we look only at the realm of activity concerned with computers, their self-description is selective. Computers require backup, maintenance, and repair; software requires updating; compatibility must be maintained; and virus checkers must be kept up-to-date. But, although all Lab members engage in these tasks, no informant has ever mentioned them as part of the work of AI.
Second, not only is the scientists' description of their work selective, because no Lab members spend all of their time writing code and building systems, but, in at least one case, it is totally false. The head of the Lab does no such work at all: His time is fully taken up with laboratory management, fund-raising, and teaching. As the following dialogue illustrates, however, he is apologetic about the fact that he does not actually do such work.
Scientist: You're getting a skewed view of AI when you look at what I do. I'm not writing any code!
Anthropologist: When was the last time you wrote some code?
Scientist: Seriously, you mean, as opposed to just playing around? Eighty or '81, I suppose, for [company name], when I was doing some consulting for them.
Anthropologist: So are you doing AI when I observe you?
Scientist: Well, when I'm working with [students], I think, yes. Because that involves conceptualizing, defining problems, and that's an important part of AI. But I have to leave it up to them to write the code.
This senior scientist concedes that he has not worked on code for system building for over 10 years. Yet he too clearly shares the belief that writing code and building systems are what constitute the real work of AI.
In describing their work as writing code and building systems, practitioners are not being absentminded. They are perfectly aware that their day includes many other pursuits besides writing code; however, they are reluctant to label these other pursuits as work. When I pursued this question with several informants, they all made some kind of distinction among their daily pursuits. One distinguished between "real work" and "other stuff he had to do"; another made a distinction between "work" and "pseudowork." "Real work" or "real AI" turns out to involve conceptualization and problem solving, writing code, and building systems; "pseudowork" or "other stuff," on the other hand, includes most other tasks, such as attending meetings, doing E-mail, recruiting faculty, looking for research funding, and writing recommendations. When asked what their work entailed, then, these scientists described only the real work, not the pseudowork on which they also spend a great deal of their time.
Some examples will help to clarify the boundary between real work and pseudowork. First, not all kinds of computer use count as real work: Recall that reading and sending electronic mail was labeled pseudowork. Second, not all kinds of programming count as work, as is illustrated by the following interchange between two students.
Anthropologist [to Student 1, sitting at a terminal]: What are you doing?
Student 1: Working on a spreadsheet program, actually.
Student 2 [interjects from neighboring desk]: Using, not working on.
Student 1: Using it.
My informants distinguish between using or working with a piece of software and working on it. Student 1 was working with the spreadsheet as a user but was not changing the code that defines the underlying spreadsheet platform. The students agreed that this operation should be called using the program, not working on it. Because using a spreadsheet generally involves doing some programming to customize it to one's purposes, this is not a distinction between programming and not-programming. Rather, it refers to what is being programmed. They apply the term working on to programming that modifies the basic code. What they are doing, then, is restricting their use of work to what they think of as real AI: building systems.
Third, other aspects of the boundary between work and pseudowork are evident in the distinction Lab members make between doing and talking. Work is doing things, not talking about doing them; talking about things is not doing work. This distinction was illustrated during a talk with the Lab head about a large interdisciplinary project based in the Lab. When I expressed concern that project members seemed to have committed themselves to a conceptual position that might turn out to be counterproductive in the long run, the Lab head replied in astonishment, "But we haven't done any work yet!" At this point, the project in question had been meeting regularly for about a year. From an anthropological point of view, a great deal of work had already been done by project members, including reading, discussing, writing two major proposals, carrying out preliminary fieldwork in two medical clinics, and beginning the process of formal knowledge elicitation for a medical expert system. What no one had yet done, however, was to write code for the prospective system. Because to AI specialists the central meaning of work is writing code and building systems, no real work had yet taken place on the project.
Lab members clearly categorize activities differently from anthropologists, for whom developing a common conceptual approach to a research project (thinking and talking about doing) would certainly count as real work. Because the Lab members see themselves as paid to build systems, general discussions of epistemological issues that do not relate to specific systems under construction are seen as a waste of time. Such discussions might well be fun (one informant characterized them as play), but they do not count as work.
If my work is to build systems, and I'm talking about epistemological issues, there would be a question in my mind about whether I was doing my work or wasting time. . . . [There] must be some guilt there.
Discussing the topic of this article, a Lab member grinned and explained his own philosophy (partly in jest) as, "Just shut your mouth and do the work!"
He wrote these words on the nearest whiteboard. Knowing that I had been learning LISP, the computer language in which Lab members build their systems, he then playfully translated the message into LISP code: (AND [shut mouth] [do work]).
What should we understand by the selectivity in the scientists' use of the term work in describing their daily activities in the Lab? What is the significance of their consistent division of these activities into legitimate and illegitimate forms of work? I suggest that this selectivity is meaningful and that it conveys some important things about Lab members' implicit assumptions.
First, the way in which they construct their work reveals their thoroughgoing technical orientation, which I've referred to elsewhere as their "engineering ethos" (Forsythe forthcoming). In describing their work as problem solving, Lab members could be referring to many of the things they regularly do in the Lab; the list given above includes numerous tasks that could be described as problem solving. However, what they actually mean by that term is formal or technical problem solving. This does not include resolving social, political, or interactional problems, although these too arise in the course of their workday. Thus the problem solving that these scientists define as their work is formal, bounded, and very narrowly defined. Problem solving in this sense is what you do while sitting alone for hours at the computer. By adopting this description of their own work, the scientists are deleting the social, to use Star's evocative phrase (Star 1991). This can be seen in two senses. The focus on writing code and building systems leaves out the fact that a significant proportion of their daily professional activities are actually social in nature; recall all those meetings and seminars. Furthermore, even writing code and building systems take place in a social and institutional context, necessitating interaction with fellow project members, experts, funders, and so on. Thus the scientists' description of their own work is highly decontextualized.
This selective description reflects an idealized representation of their work. The practitioners are describing what they see as the essence of their work, which is also the part they most enjoy doing. Their construction of their work thus focuses not on what they actually do all of the time but, rather, on what they would like to spend all their time doing if only the unpleasant demands of everyday life took care of themselves. From an anthropological standpoint, these scientists seem strikingly casual about the distinction between the ideal and the real. Asked a question about what their work entails, Lab members consistently respond with an answer that seems far more ideal than real.
I suggest that the selectivity of their description of their own work has symbolic meaning. The particular activities the scientists list as real work are those in terms of which they construct their own identity as real AI scientists. In the Lab, possession of these skills is used on occasion as a boundary marker to distinguish between those who belong in an AI lab and those who do not. For example, one day at a Lab meeting, a practitioner opposed a suggestion of mine with the comment, "Anyone who can't program Unix or figure out what's wrong with a Mac shouldn't be in this lab." Now the fact is that there is an enormous overlap between what this researcher does all day in the Lab and what I do there as a participant observer: We attend many of the same meetings, read some of the same literature, write papers, seek funding (from some of the same agencies), talk with many of the same students and colleagues, and spend hours every day sitting at the same type of computer. In fact, just about the only activities we do not share are writing code and building systems. Because these are the activities that define real AI, however, they can also be used to define a boundary between those who belong in the Lab and those who do not.6 My point is not to object to such boundary drawing but, rather, to demonstrate that the professional activities that take place in the Lab have differential symbolic weight.
To sum up my argument thus far, practitioners can be observed to perform a wide range of tasks during the workday. But when asked what their work consists of, they consistently mention only a few of these tasks. Work, as my informants construct it, is not about interpersonal communication or most of the other daily activities one can observe in the Lab, although all Lab members spend a large proportion of their time on these activities. Instead, work is about finding solutions to formal problems through the process of building systems. Work is thus defined in terms of isolated task performance that involves solving logical problems, but not interpersonal ones. This is why laboratory management does not strike the head of the Lab as real work.
Discussion
The starting point of this investigation was the apparently simple question "What is the work of AI?" Complexity arose as it became clear that practitioners describe their work in a way that leaves out a great deal. These scientists represent their work in terms of an ideal model that seems to have symbolic meaning to them but that, in its focus on formal problem solving, is at best a partial representation of what their daily practice actually entails.
The long list of activities left out of this description (see above) may be familiar to the academic reader, who may also resent administrative and other duties of the sort that AI practitioners perform but do not consider to be real work. However, we should not allow familiarity to blind us to the significance of the considerable discrepancy between the scientists' representation of their work and their observable practice. By systematically discounting a substantial proportion of their work, they are engaging in large-scale deletion (Star 1991, 266-67). Much of what they actually do in the Lab is thereby rendered invisible, relegated to a category of unimportant or illegitimate work. Among these deleted activities are the many acts of social, intellectual, and material maintenance that lie outside the realm of formal problem solving. This systematic deletion is taken for granted in the world of expert systems: In anthropological terms, it is a part of the local culture.
The selectivity with which system builders represent their own work is interesting from a number of standpoints. My discussion will focus on two of them. They are, first, the implications of the practitioners' concept of work for the way they go about knowledge acquisition and, second, what this may imply in turn for the way expert systems function in the real world.
Implications for Knowledge Acquisition
When carrying out knowledge elicitation to gather material for the knowledge base of an expert system, my informants approach the experts' work with the same selectivity that they bring to characterizing their own work. The scientists' construction of their experts' work is also partial, idealized, formal, and decontextualized; it is characterized by the same deletions outlined above.7
In describing their own work, the scientists conflate the ideal and the real, making little distinction between observable practice and verbal descriptions of that practice. Their approach to knowledge elicitation reflects the same way of thinking. The purpose of an expert system is to perform the work of a given human expert; the knowledge encoded in the system is intended to reflect what that expert actually does. However, knowledge engineering rarely involves observing experts at work. Instead, what practitioners do is to interview them; that is, they collect experts' reports of what they do at work, apparently failing to take into account that (as we have seen in the case of the system builders themselves) people's descriptions of their own work processes can be highly selective.8 In actuality, the information recorded during knowledge elicitation is even more distanced than this from observable practice: What goes into the knowledge base of an expert system is in fact the knowledge engineer's model of the expert's model of his or her own work.
In response to this argument, practitioners tend to reply that they "do" watch the expert at work. However, what they mean by this again reflects the selective view of work that I am attempting to describe. Experts are indeed often asked to do "work" for the purpose of knowledge engineering; however, this usually does not mean demonstrating their normal work patterns in their customary occupational setting. Instead, what they are asked to do is to perform isolated tasks or to solve formal problems that are taken to represent that work. The experts' verbal commentary as they solve such problems is taken as data on their work performance in its normal setting. For example, for a system designed to do medical diagnosis, system builders asked doctors to read case histories and give a verbal account of how they would diagnose them; for a system designed to diagnose automotive problems, a mechanic was asked how he went about deciding what was wrong with a car; for a system to simulate student reasoning in physics, students of varying degrees of expertise were given problems to solve from an undergraduate textbook. As representations of actual work in the world, these contrived tasks are partial in precisely the same way as the scientists' representations of their own work. Because such tasks are narrow and formal ("paper puzzles" as opposed to genuine problems encountered in a real-world context), they leave out a great deal of the normal work process. In addition, because it is customary in knowledge engineering to work with only one expert per system (the physics project designated the best student problem solver as the expert), conventional elicitation procedures are likely to overlook variations in the way different skillful individuals approach the same task.
Some practitioners are aware that there are drawbacks in these conventional procedures. For example, a recent textbook by Scott, Clayton, and Gibson (1991) describes the limitations of relying upon experts' descriptions of their rules and heuristics for solving a problem. The authors attempt to overcome some of these limitations by suggesting that the work of repairing toasters be analyzed by having "Frank Fixit" bring a toaster to the knowledge elicitation session to demonstrate his diagnostic skills. But, even when the object "diagnosed" is actually used in the knowledge acquisition process, the work tends to be observed and analyzed out of context. The anthropological emphasis upon situated understanding (Suchman 1987) stresses the importance of seeing how people approach problems in the settings in which they are accustomed to work. This is not usually done, however: Expert systems are still built without the system builders ever visiting the expert's place of work.
Approaching knowledge elicitation as they do, practitioners learn little about the informal social and institutional processes that are an essential part of work in real-life settings. For example, faced with a problematic diagnosis, physicians often call a colleague (Weinberg et al. 1981); if official rules and procedures do not suit the needs of a particular patient or situation, they figure out how to get around those rules.9 By relying on interviews plus data on problem solving in contrived situations, system builders acquire a decontextualized picture of the expert's work. As in the representation of their own work, they discount much of what experts actually do.
To sum up, the knowledge that the scientists put into their knowledge bases is narrow: It simply leaves out a lot. The models of behavior it includes are encoded without much information, either on the contexts in terms of which these models have meaning or on how the models are actually translated into action in real-life situations. In that sense, this knowledge is brittle as well. Standard knowledge elicitation procedures reflect, and replicate, the same deletions that practitioners apply to their own work.
Implications for Expert Systems
I have argued that there is a parallel between the way my informants construct their own work and the way they construct their experts' work in the course of knowledge elicitation. In both cases, we see the influence of a formal and decontextualized notion of work. The final step of my argument is to draw the conclusion that this formal and decontextualized notion of work contributes to the construction of expert systems that are narrow and brittle, and thus vulnerable to failure when faced with the complexity of work processes in the real world. The system described by Sachs (forthcoming; see above) fell off the knowledge cliff at least in part because it incorporated a normative model of the work processes that it was designed to monitor. This idealized representation was apparently too partial and inflexible to function as intended in the real world (Sachs forthcoming). I suggest that the scientists who built this system may unintentionally have contributed to its fragility by assuming that actual work processes and contexts would conform to the idealized representations of them presumably offered by their experts. This expectation, which reflects the culture of AI, confuses the ideal and the real. (In contrast, perhaps because of their fieldwork experience, anthropologists tend to make the opposite assumption. If anything, they assume that normative rules and everyday action are not related in a simple way [Forsythe forthcoming; Geertz 1973, 1983b].) I assume that the builders of this system followed the conventional knowledge elicitation procedures described above and did not systematically observe the workplaces in which their system was to be fielded. Had they done so, they would surely have discovered that problem solving in real-world contexts does not (and indeed cannot) always conform to the simplified "typical" cases offered by experts. Perhaps the system could then have been designed to "expect" deviations from the ideal.
In summary, the characteristic deletions in the scientists' representation of their own work are replicated not only in their system-building procedures but also in the technology itself. I suggest that, if today's expert systems tend to fall off the knowledge cliff, one cause may be the shallowness of the information encoded in those systems on the nature of expert work and the nature of the knowledge that experts bring to such work (Forsythe forthcoming).
Conclusion
The case presented in this article supports the growing body of evidence from the anthropology of science and technology that knowledge-based systems are far from value-free (Sachs forthcoming; Suchman 1987, 1992). Elsewhere, I (Forsythe forthcoming) have tried to show that such systems incorporate tacit assumptions about the nature of knowledge; here, I contend that they also embody assumptions about the nature of work.
There is a striking symmetry between what the practitioners seem to assume about the world and the characteristics of the systems they build to operate in the world. They do not seem aware of this symmetry, however. Given that fundamental assumptions about the nature of knowledge and work are simply taken for granted, the process by which system builders embed these beliefs in the tools they build is unlikely to be intentional. Rather, as choices arise in the system-building process, tacit beliefs influence decision making by default, in a way that anthropologists describe as cultural. Intentional or not, some of the practitioners' fundamental beliefs are replicated in the system-building process, with significant implications for the resulting technology.
The scientists I have described are unlikely to agree with this point of view: As we have seen, they tend to blame the inflexibility of expert systems on the limitations of current technology and representational capabilities. But the connection I have tried to delineate has nothing to do with the limitations of technology. Rather, it concerns one of the nontechnical factors that practitioners often discount (Forsythe and Buchanan 1992): their own tacit assumptions, which, among other things, help to shape their knowledge acquisition procedures. If they understand real work to mean only formal, technical, decontextualized problem solving, and if their information-gathering procedures reflect that perception, then it is hardly surprising that the expert systems they produce are narrow and brittle. Ironically, through their own default assumptions, these scientists may be helping to push their own systems off the knowledge cliff.
Notes
1. Rule-based expert systems and knowledge engineering do not constitute the entire field of AI. However, because my participant observation has taken place mainly within that world, my generalizations should be understood in that context. One major alternative approach to system building within AI (as opposed to such other research areas as robotics) is defined by the system architecture known variously as connectionism, neural nets, or PDP (parallel and distributed processing). Adherents of these two approaches compete for funding and for the power to define what should count as real AI.
2. Expert systems are a subset of the broader category of knowledge-based systems. Further information on expert systems is provided in the following sources: Davis (1989a, 1989b); Harmon and King (1985); Hayes-Roth, Waterman, and Lenat (1983); Johnson (1986); Scott, Clayton, and Gibson (1991); and Waterman (1986).
3. This metaphor is generally attributed to E. A. Feigenbaum.
4. See Gasser (1986) for discussion of this and other ways in which users "manage" computer systems to work around design problems.
5. The study of the meaning of information was conducted with special reference to the way this term is constructed in medical informatics, a field that lies at the intersection of computer science, information science, and medicine.
6. On other occasions in the course of my fieldwork, I have encountered further examples of the use of particular technical skills as boundary markers. In three of the laboratories in which I have done research, I was assigned a computer terminal or a networked personal computer at which to work. In each case, knowledge of the local operating system (different in each laboratory) seemed to convey a message about belonging in some way to the lab. Informants became discernibly more friendly and helpful once I had demonstrated the ability to use the local E-mail and text-editing software with some degree of facility. The text editors of relevance here are those that (like Emacs and vi) are necessary for system hacking, that is, working on large computer-operating systems. Knowledge of such user-friendly personal word processors as Word or MacWrite does not carry the same symbolic meaning.
7. For further anthropological analysis of the knowledge elicitation process, see Forsythe (forthcoming) and Forsythe and Buchanan (1989). For discussion of knowledge elicitation from a different point of view, see Collins (1990). My reactions to Collins's discussion of knowledge engineering are given in Forsythe (forthcoming).
8. A similar observation is reported in a case study by Blomberg and Henderson (1990). They note that, even in a participatory design project explicitly intended to involve users in the design of a computer-based system, designers of the computer interface ended up relying "not on seeing use, but on talking about it" (p. 357).
9. I have observed this to be the case in the course of my own ongoing fieldwork in Pittsburgh in settings in internal medicine, neurology, and emergency medicine.
References
Blomberg, J., and A. Henderson. 1990. Reflections on participatory design: Lessons from the Trillium experience. In Empowering people: Proceedings of CHI 1990, edited by J. Carrasco Chew and J. Whiteside, 353-59. New York: ACM Press.
Buchanan, B., and E. Shortliffe, eds. 1984. Rule-based expert systems. Reading, MA: Addison-Wesley.
Collins, H. 1990. Artificial experts: Social knowledge and intelligent machines. Cambridge: MIT Press.
Davis, R. 1989a. Expert systems: How far can they go? Part 1. AI Magazine 10(1): 61-67.
Davis, R. 1989b. Expert systems: How far can they go? Part 2. AI Magazine 10(2): 65-67.
Forsythe, D. E. 1992. Blaming the user in medical informatics: The cultural nature of scientific practice. Knowledge and Society 9:95-111.
Forsythe, D. E. Forthcoming. Engineering knowledge: The construction of knowledge in artificial intelligence. Social Studies of Science.
Forsythe, D. E., and B. G. Buchanan. 1989. Knowledge acquisition for expert systems: Some pitfalls and suggestions. IEEE Transactions on Systems, Man and Cybernetics 19(3): 435-42.
Forsythe, D. E., and B. G. Buchanan. 1991. Broadening our approach to evaluating medical information systems. In Symposium on Computer Applications in Medical Care (SCAMC 91), edited by P. D. Clayton, 8-12. New York: McGraw-Hill.
Forsythe, D. E., and B. G. Buchanan. 1992. Non-technical problems in knowledge engineering: Implications for project management. Expert Systems with Applications 5:203-12.
Forsythe, D. E., B. G. Buchanan, J. A. Osheroff, and R. A. Miller. 1992. Expanding the concept of medical information: An observational study of physicians' information needs. Computers and Biomedical Research 25(2): 181-200.
Gasser, L. 1986. The integration of computing and routine work. ACM Transactions on Office Information Systems 4(3): 205-25.
Geertz, C. 1973. The interpretation of cultures. New York: Basic Books.
Geertz, C. 1983a. Common sense as a cultural system. In Local knowledge: Further essays in interpretive anthropology, edited by C. Geertz, 73-93. New York: Basic Books.
Geertz, C. 1983b. Local knowledge: Further essays in interpretive anthropology. New York: Basic Books.
Guha, R. V., and D. B. Lenat. 1990. CYC: A midterm report. AI Magazine 11(3): 32-59.
Harmon, P., and D. King. 1985. Expert systems. New York: Wiley.
Hayes-Roth, F., D. Waterman, and D. Lenat, eds. 1983. Building expert systems. Reading, MA: Addison-Wesley.
Hess, D. 1992. Introduction: The new ethnography and the anthropology of science and technology. In Knowledge and society: The anthropology of science and technology, edited by D. Hess and L. Layne, 1-26. Greenwich, CT: JAI.
Johnson, G. 1986. Machinery of the mind: Inside the new science of artificial intelligence. New York: Times Books.
Karp, P. D., and D. C. Wilkins. 1989. An analysis of the distinction between deep and shallow expert systems. International Journal of Expert Systems 1(2): 1-32.
Knorr-Cetina, K. D. 1981. The manufacture of knowledge: An essay on the constructivist and contextual nature of science. Elmsford, NY: Pergamon.
Latour, B., and S. Woolgar. 1986. Laboratory life: The construction of scientific facts. Princeton, NJ: Princeton University Press.
Lenat, D. B., and E. Feigenbaum. 1987. On the thresholds of knowledge. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence (IJCAI-87), edited by John McDermott, 1173-82. Los Altos, CA: Morgan Kaufmann.
Lenat, D. B., and R. V. Guha. 1990. Building large knowledge-based systems: Representation and inference in the CYC Project. Reading, MA: Addison-Wesley.
Lenat, D. B., M. Prakash, and M. Shepherd. 1986. CYC: Using common sense knowledge to overcome brittleness and knowledge acquisition bottlenecks. AI Magazine 6(4): 65-85.
Lynch, M. 1985. Art and artifact in laboratory science: A study of shop work and shop talk in a research laboratory. Boston: Routledge & Kegan Paul.
McCarthy, J. 1984. Some expert systems need common sense. Annals of the New York Academy of Sciences 426:817-25.
Nyce, J., and G. Bader. Forthcoming. Hierarchy, individualism and hypermedia in two American high schools. In Informationsöverflöd och Mänskliga Dilemman, edited by L. Ingelstam. Stockholm: Carlsson.
Pfaffenberger, B. 1992. The social anthropology of technology. Annual Review of Anthropology 21:491-516.
Sachs, P. Forthcoming. Thinking through technology: The relationship of technology and knowledge at work. Information, Technology and People.
Scott, A., J. Clayton, and E. Gibson. 1991. A practical guide to knowledge acquisition. Reading, MA: Addison-Wesley.
Star, S. L. 1991. The sociology of the invisible: The primacy of work in the writings of Anselm Strauss. In Social organization and social process: Essays in honor of Anselm Strauss, edited by D. Maines, 265-83. New York: Aldine de Gruyter.
Suchman, L. 1987. Plans and situated actions. Cambridge: Cambridge University Press.
Suchman, L. 1992. Technologies of accountability: On lizards and airplanes. In Technology in working order: Studies of work, interaction and technology, edited by G. Button, 113-26. London: Routledge.
Traweek, S. 1988. Beamtimes and lifetimes: The world of high energy physicists. Cambridge: Harvard University Press.
Waterman, D. 1986. A guide to expert systems. Reading, MA: Addison-Wesley.
Weinberg, A., L. Ullian, W. Richards, and P. Cooper. 1981. Informal advice and information-seeking between physicians. Journal of Medical Education 56:174-80.

Diana E. Forsythe is Research Associate Professor of Computer Science and Anthropology at the University of Pittsburgh Department of Computer Science, Pittsburgh, PA 15260. A cultural anthropologist, she has been a participant observer in artificial intelligence and medical informatics since 1986. Her research investigates cultural aspects of scientific practice, with particular attention to the ways in which scientists' own values and assumptions become embedded in technology. She has published recently in Social Studies of Science, Knowledge and Society, Computers and Biomedical Research, IEEE Transactions on Systems, Man and Cybernetics, and Annals of Internal Medicine.