Dick Thornburgh and Herbert S. Lin, Editors

Committee to Study Tools and Strategies for Protecting Kids from Pornography and Their Applicability to Other Inappropriate Internet Content

Computer Science and Telecommunications Board

National Research Council



Executive Summary



The Internet is both a source of promise for our children and a source of concern. The Internet provides convenient access to a highly diverse library of educational resources, enables collaborative study, and provides opportunities for remote dialog with subject-matter experts. It provides information about hobbies and sports, and it allows children to engage with other people on a near-infinite variety of topics. Through online correspondence, their circles of friendship and diversity of experience can achieve a rich and international scope.

Yet press reports have suggested to many that their children are vulnerable to harm on the Internet. While only a small fraction of material on the Internet could reasonably be classified as inappropriate for children, that small fraction is highly visible and controversial.1 If the full educational potential of the Internet for children is to be realized, such concerns must be reasonably addressed.

At the request of the U.S. Congress in 1998, the Computer Science and Telecommunications Board of the National Research Council assembled a committee with expertise in many fields. Based on a wide range of information sources as well as the committee's own expertise, this report seeks to frame the problem in a legal, educational, technological, social, and societal context and to provide information useful to various decision-making communities--e.g., parents, the information technology industry, school boards, librarians, and government at all levels--about possible courses of action to help children be safer in their use of the Internet.


DEFINITIONAL CONSIDERATIONS IN PROTECTING CHILDREN FROM INTERNET PORNOGRAPHY

The term "pornography" lacks a well-defined meaning. To be sure, broad agreement may be found that some materials are or are not "pornographic," but for other materials, individual judgments about what is or is not "pornography" will vary. In recognition of this essential point, the report uses the term "inappropriate sexually explicit material" to underscore the subjective nature of the term.

The term "child" is also problematic. From birth to the age of legal emancipation covers a very wide developmental range. What is inappropriate for a 6-year-old to see may not be inappropriate for a 16-year-old to see, and in particular, older high school students have information needs for education that are very different from those of elementary school students.

Finally, "protection" is an ambiguous term. For example, does "protection" include preventing a child from obtaining inappropriate material (sexual or otherwise) even when he or she is deliberately seeking such material? Or, does it mean shielding a child from inadvertent exposure? Or, does it entail giving the child tools to cope effectively with exposure to inappropriate material if he or she should come across it? These scenarios pose conceptually different problems to solve.

All of these ambiguities complicate enormously the debate in communities about the nature of the problem and what might or should be done about it.


SEXUALITY IN MEDIA

The fact that children can sometimes see--and even sometimes seek out--images of naked people is not new. However, compared to other media, the Internet has characteristics that make it harder for adults to exercise responsible supervision over children's use of it. A particularly worrisome aspect of the Internet is that inappropriate sexually explicit material can find its way onto children's computer screens without being actively sought. Further, it is easy to find on today's Internet not only images of naked people, but also graphically depicted acts of heterosexual and homosexual intercourse (including penetration), fellatio, cunnilingus, masturbation, bestiality, child pornography, sadomasochism, bondage, rape, incest, and so on. While some such material can be found in sexually explicit videos and print media that are readily available in hotels, video rental stores, and newsstands, other sexually explicit material on the Internet is arguably more extreme than material that is easily available through non-Internet media.

The Internet also enables many strangers to establish contact with children. While many interactions between children and strangers can be benign or even beneficial (e.g., a student corresponding with a university scientist), strangers can also be child predators and sexual molesters. Face-to-face contact with such individuals may be traumatic and even life-threatening for a child; for this reason, Internet-based interactions (which include chat rooms, instant messages, and e-mail dialogs, and which could involve the transmission of sexually explicit material as one component) that can lead to face-to-face contact pose a greater potential danger to children than does the passive receipt of material--even highly inappropriate material--per se. The anonymity and interaction-at-a-distance of the Internet prevent a child from using the cues that arise in face-to-face interaction to help judge another person's intent (e.g., gestures, tone of voice, age).


THE LEGAL CONTEXT

The legal context for sexually explicit material is driven by the First Amendment to the Constitution, and three categories of sexually explicit material are subject to government regulation. Obscenity is sexually explicit material that violates contemporary community standards in certain specified ways. (How the appropriate "community" is defined is a matter of great uncertainty, especially in an Internet context.) Child pornography is material that depicts a child engaged in a sexual act or "lewd" exhibition of his or her genitals. Obscenity and child pornography enjoy no First Amendment protection. A third category of sexually explicit material that is not obscene and not child pornography can be "obscene for minors"; such material may be regulated for minors but must be freely available to adults.


NEW TECHNOLOGY, DIFFERENT ECONOMICS

Searching the Internet for information is generally enabled by "search engines" that accept a few user-typed terms and return to the user links to Web pages that refer to those terms. A search engine can be used to find information on science, sports, history, and politics, as well as sexually explicit material. Furthermore, because of ambiguities in language (e.g., "beaver" has both sexual and non-sexual connotations), a search will sometimes return links to material that is not related to what the user is trying to find. In some cases, that unrelated material will contain sexually explicit content when it was not sought.
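To make the ambiguity concrete, the brief sketch below uses a handful of hypothetical pages and a plain keyword match (not any particular search engine's ranking method) to show how a single ambiguous term retrieves unrelated and sexually explicit pages alike.

```python
# Minimal sketch of keyword-based retrieval over invented pages.
# A real search engine ranks results by relevance, but the core difficulty is the
# same: a bare term matches every page that mentions it, regardless of context.

pages = {
    "https://example.org/wildlife/beaver-habitat": "The beaver builds dams along slow rivers and streams.",
    "https://example.org/dictionary/beaver": "beaver (noun): a large semiaquatic rodent with a broad flat tail.",
    "https://adult.example.com/gallery": "XXX beaver pics, adults only, explicit content inside.",
}

def search(query: str) -> list[str]:
    """Return the URLs of all pages whose text contains the query term."""
    term = query.lower()
    return [url for url, text in pages.items() if term in text.lower()]

print(search("beaver"))
# All three URLs come back; the query term alone cannot separate the
# zoological pages from the sexually explicit one.
```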

A second common use of the Internet is to communicate with others. However, the Internet is designed in such a way that it transports bits of information without regard for the meaning or content of those bits. Thus, Internet traffic can contain a letter to one's aunt, a chat about sports, a draft manuscript for a report, or sexually explicit images. Furthermore, controlling traffic demands special effort at the sending and/or receiving points.
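This design principle can be illustrated with a short sketch using Python's standard socket library (the payloads are invented): the transport delivers whatever bytes the endpoints exchange, so any screening of content must happen at the sending or receiving end.

```python
import socket

# Minimal sketch: the transport carries whatever bytes the endpoints hand it.
# Both payloads below travel identically; nothing along the way examines their
# meaning, so any screening must be done at the sending or receiving endpoint.

sender, receiver = socket.socketpair()

for payload in (b"Dear Aunt May, thank you for the birthday card!",
                b"<a href='http://explicit-gallery.example.com'>click here</a>"):
    sender.sendall(payload)
    data = receiver.recv(4096)
    print(f"{len(data)} bytes delivered; content not inspected in transit")

sender.close()
receiver.close()
```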

The Internet is also a highly anonymous medium. Such anonymity can be advantageous for a teenager who finds answers on the Internet to questions that he or she is too embarrassed to ask an adult. It can also be disadvantageous, in that someone can conduct antisocial or criminal activities (e.g., child sexual solicitation) with less fear of identification and/or sanction than might be true in the physical world.

Information technology drives the economics of information on the Internet. Because information can be represented in digital form, it is very inexpensive to send, receive, and store. Thus, for a few hundred dollars to cover the cost of a digital camera and a Web site, anyone can produce sexually explicit content and publish it on the Web for all to see. Furthermore, because the Internet is global, regulatory efforts in the United States aimed at limiting the production and distribution of such material are difficult to apply to foreign Web site operators.

Sources of inappropriate sexually explicit material on the Internet are both commercial and non-commercial. The commercial source is the online adult entertainment industry, which generates about a billion dollars a year in revenue from paying adults. (For comparison, the adult entertainment industry as a whole generates several billion dollars a year--perhaps as much as $10 billion.) U.S. business entities in the industry support around 100,000 sites (globally, there are about 400,000 for-pay adult sites). Globally, sexually explicit Web pages constitute a few percent of the 2+ billion publicly accessible Web pages as of this writing.

For many online adult entertainment firms, profitability depends on drawing a large volume of traffic in a search for paying customers, and many seek revenue through the sale of advertising that typically makes no effort to differentiate between adults and children. Further, the aggressive marketing campaigns that firms need to stand out in a highly saturated market--where margins are inherently low and therefore traffic is critical to economic survival--inevitably reach both minors and adults. The exposure of minors to such material is thus a side effect of the effort to reach large numbers of paying customers.

To date, public debate has focused largely on commercial dimensions of inappropriate sexually explicit material on the Internet. But there are many non-commercial sources of inappropriate sexually explicit material on the Internet, including material available through peer-to-peer file exchanges, unsolicited e-mail, Web cameras, and sexually explicit conversation in chat rooms. Solutions that focus only on commercial sources will therefore not address the entire problem.


THE IMPACT OF SEXUALLY EXPLICIT MATERIAL ON CHILDREN

Perhaps the most vexing dimension of dealing with children's exposure to sexually explicit material on the Internet is the lack of a clear scientific consensus regarding the impact of such exposure. Nonetheless, people have very strong beliefs on the topic. Some people believe that exposure to certain sexually explicit material is so dangerous to children that even one exposure to it will have lasting harmful effects. Others believe that there is no evidence to support such a claim and that the impact of exposure to such material must be viewed in the context of a highly sexualized media environment.

It is likely that individuals on both sides of the issue could reach agreement on the undesirability of exposing children to depictions of the most extreme and most graphic examples of sexual behavior, in the sense that most individual parents on each side would prefer to keep their children away from such material. The committee concurs, in the sense that it believes that there is some set of depictions of extreme sexual behavior whose viewing by children would violate and offend the committee's collective moral and ethical sensibilities, though this sentiment would not be based on scientific grounds. However, protagonists in the debate would be likely to part company on whether material that is less extreme in nature is inappropriate or harmful: such material might include information on sexual health, the depiction of non-traditional "scripts" about how people can interact sexually, and descriptions of what it means to be lesbian or homosexual in orientation.

Extreme sexually explicit imagery intended to create sexual desire on the one hand, and responsible information about sexual health on the other, are arguably unrelated and, many would contend, easily distinguished. But much content is not so easily categorized. While some extreme sexually explicit material meets legal tests for obscenity (and therefore does not enjoy First Amendment protection), less extreme material may not--and material described in the previous paragraph, lingerie advertisements, and models in swimsuits generally do enjoy First Amendment protection, at least for adults and often for children.

In short, sexually oriented content that falls outside of the realm of extreme sexually explicit imagery is likely to be the source of greatest contention, and there are arguments about whether such content would be subject to regulatory efforts aimed at reducing the exposure of minors to material that is or may be sexual in nature.


PATHS OF EXPOSURE

Children may be exposed to inappropriate Internet material or experiences through a variety of channels, including Web pages, e-mail, chat rooms, instant messages, Usenet newsgroups, and peer-to-peer file-sharing connections. Furthermore, the exposure may be sought by the child (i.e., deliberate) or unsought (i.e., inadvertent), and each kind of exposure takes many forms. An example of deliberate exposure is a child searching for sexually explicit terms in a search engine and clicking on the links returned. An example of inadvertent exposure is a child receiving unsolicited e-mail containing sexually explicit material or links to such material.


IDENTIFYING INAPPROPRIATE MATERIAL

Three methods can be used to identify inappropriate material. Whether machine or human, the agent that makes the immediate decision about the appropriateness of content can base that decision on the specific content of the material, on a tag or label associated with the material, or on the source of the material (or on a combination of these factors).

In practice, the volume of material on the Internet is so large that it is impractical for human beings to evaluate every discrete piece of information for inappropriateness. Moreover, the content of some existing Web pages changes very quickly, and new Web pages appear at a rapid rate. Thus, identifying inappropriate material must rely either on an automated, machine-executable process for determining inappropriate content or on a presumption that everything that is not explicitly identified by a human being as appropriate is inappropriate. An approach based on machine-executable rules abstracted from human judgments inevitably misses nuances in those human judgments, which reduces the accuracy of this approach compared to that of humans, while the presumption-based approach necessarily identifies a large volume of appropriate material as inappropriate.
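A minimal sketch of these three bases for a decision appears below; the term lists, labels, and source lists are invented for illustration and do not reflect any deployed product's rules.

```python
# Minimal sketch of the three bases on which an agent can judge material:
# its content, an associated label or tag, and its source. Real systems
# combine far richer versions of the same three signals.

EXPLICIT_TERMS = {"xxx", "hardcore"}        # content-based: crude keyword matching
RESTRICTED_LABELS = {"adults-only"}         # label-based: a tag supplied with the material
BLOCKED_SOURCES = {"adult.example.com"}     # source-based: a list of known adult sites

def is_inappropriate(text: str, labels: set[str], source_host: str) -> bool:
    """Judge a page by its content, its labels, and its source (any one suffices)."""
    by_content = any(term in text.lower() for term in EXPLICIT_TERMS)
    by_label = bool(labels & RESTRICTED_LABELS)
    by_source = source_host in BLOCKED_SOURCES
    return by_content or by_label or by_source

# A page carrying none of the listed terms, labels, or sources slips through--
# one way machine-executable rules miss the nuance of human judgment.
print(is_inappropriate("tasteful nude photography", set(), "gallery.example.net"))  # False
```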

All mechanisms for determining whether material is appropriate or inappropriate will make erroneous classifications from time to time. But such misclassifications are fundamentally different from disagreement over what is inappropriate. Misclassifications are mistakes due to factors such as inattention on the part of humans or poorly specified rules for automated classification. They will inevitably occur, even when there is no disagreement over the criteria for inclusion in various categories. In contrast, disagreements over what is appropriate result from differences in judgment--Person A says, "That material is inappropriate," and Person B says of the same material, "That material is not inappropriate." Both of these issues exacerbate the problem of putting into place a systematic way to protect children.


CONCEPTS OF PROTECTION

Whether protection is based on law, technology, or education, it generally involves some combination of the following concepts:

  • Restricting a minor to appropriate material through techniques that give a minor access only to material that is explicitly judged to be appropriate;
  • Blocking inappropriate material through techniques that prevent a minor from being exposed to inappropriate material;
  • Warning a minor of impending exposure to inappropriate material or suggesting appropriate material, leaving him or her with an explicit choice to accept or decline a viewing;
  • Deterring the access of minors to inappropriate material by detecting access to such material and imposing a subsequent penalty for such access;
  • Educating a minor about reasons not to access inappropriate material in order to inculcate an internal sense of personal responsibility and to build skills that make his or her Internet searches less likely to turn up inappropriate material inadvertently;
  • Reducing the accessibility of inappropriate material so that inappropriate material is harder for minors to find;
  • Reducing the appeal of deliberate contact with inappropriate material by making access to the material (and only such material) more difficult, cumbersome, and inconvenient; and/or
  • Helping a minor to cope with the exposure to inappropriate material that will most likely occur at least occasionally with extended Internet use.

All of these concepts have costs and benefits. Any party seeking to decide on an appropriate mix of approaches based on these concepts must consider the extent and nature of physical, emotional, developmental, social, ethical, or moral harm that it believes arises from exposure to inappropriate material or experiences. Greater costs may be justifiable if the presumed harm is large and highly likely, or if young children rather than youth in late adolescence are involved.

Differing institutional missions must also be considered. A public school serves the primary purpose of providing academic instruction to individuals who have not attained the age of majority. By contrast, a public library serves the primary purpose of providing a broad range of information to the entire community in which it is based, including children and adults, and the information needs of the community--taken as a whole--are generally much more diverse than those of children and youth in school. Thus, it is not surprising that schools and libraries have different needs and might take different approaches in seeking to protect children and youth from inappropriate Internet material and experiences.


APPROACHES TO PROTECTION

Public Policy

Public policy that affects the supply of inappropriate sexually explicit material can operate to make such material less available to children. For practical and technical reasons, it is most feasible to seek regulation of commercial sources of such material, because these sources seek to draw attention to themselves while non-commercial sources generally operate through private channels. Public policy can provide incentives for the online adult industry to take actions that more effectively deny children access to its material and, to some extent, to reduce the number of providers of such material.

Public policy can go far beyond the creation of statutory punishment for violating some approved canon of behavior to include shaping the Internet environment in many ways. For example, public policy can be used to reduce uncertainty in the regulatory environment; promote media literacy and Internet safety education (including development of model curricula, support of professional development for teachers on Internet safety and media literacy, and encouraging outreach to educate parents, teachers, librarians, and other adults about Internet safety education issues); support development of and access to high-quality Internet material that is educational and attractive to children in an age-appropriate manner; and support self-regulatory efforts by private parties.


Social and Educational Strategies

Social and educational strategies are intended to teach children how to make wise choices about how they behave on the Internet and how to take control of their online experiences: where they go, what they see, what they do, and with whom they talk. Such strategies must be age-appropriate if they are to be effective. Further, such an approach entails teaching children to be critical, skeptical, and self-reflective about the material that they are seeing.

An analogy is the relationship between swimming pools and children. Swimming pools can be dangerous for children. To protect them, one can install locks, put up fences, and deploy pool alarms. All of these measures are helpful, but by far the most important thing that one can do for one's children is to teach them to swim.

Perhaps the most important social and educational strategy is responsible adult involvement and supervision. Peer assistance can be helpful as well, as many youth learn as much in certain areas from peers or near-peers (e.g., siblings) as they do from parents, teachers, and other adult figures. Acceptable use policies in families, schools, libraries, and other organizations provide guidelines and expectations about how individuals will conduct themselves online. Such policies thus provide a framework within which children can become more responsible for making good choices about the paths they take in cyberspace, learning skills that are relevant and helpful in any venue of Internet use.

Internet safety education is analogous to safety education in the physical world, and may include teaching children how sexual predators and hate group recruiters typically approach young people, how to recognize impending access to inappropriate sexually explicit material, and when it is risky to provide personal information online. Information and media literacy provide children with skills in recognizing when information is needed and how to locate, evaluate, and use it effectively, irrespective of the media in which it appears, and in critically evaluating the content inherent in media messages. A child with these skills is less likely to stumble across inappropriate material and more likely to be better able to put it into context if and when he or she does.

The greater availability of compelling, safe Internet content that is developmentally appropriate, educational, and enjoyable, and that covers a broad range of appealing or helpful topics (including but not limited to sex education), would help to make some children less inclined to spend their time searching for inappropriate material or engaging in inappropriate or unsafe activities. Greater availability entails both the development of new appropriate content and the creation of portals and Web sites designed to facilitate easy access to existing appropriate content.

Public service announcements and media campaigns could help to educate adults about the need for Internet safety and about the nature and extent of dangers on the Internet. Such campaigns are best suited for relatively simple messages (e.g., "be aware of where your child is on the Internet" and "ask for parental controls when you subscribe to an Internet service provider").

Social and educational strategies focus on the nurturing of personal character, the development of responsible choice, and the strengthening of coping skills. Because these strategies locate control in the hands of the youth targeted, children have opportunities to exercise some measure of choice--and as a result some children are likely to make mistakes as they learn to internalize the object of these lessons.

These strategies are not inexpensive, and they require tending and implementation. Adults must be taught to teach children how to make good choices on the Internet. They must be willing to engage in sometimes-difficult conversations. They must face the trade-offs inevitable with pressing schedules of work and family. These strategies do not provide a quick fix. But in addition to teaching responsible behavior and coping skills for when a child encounters inappropriate material and experiences on the Internet, they are relevant to teaching children to think critically about all kinds of media messages, including those associated with hate, racism, senseless violence, and so on; to conduct effective Internet searches for information and to navigate with confidence; and to make ethical and responsible choices about Internet behavior--and about non-Internet behavior as well.


Technology-based tools

A wide array of technology-based tools is available for dealing with inappropriate Internet material and experiences. Filters--systems or services that limit in some way the content to which users may be exposed--are the most widely used technology-based tool. All filters suffer from both false positives (overblocking) and false negatives (underblocking). However, filters can be highly effective in reducing the exposure of minors to inappropriate content if the inability to access large amounts of appropriate material is acceptable. Teachers and librarians most commonly reported that filters served primarily to relieve political pressure on them and to insulate them from liability (suggesting that filter vendors are more likely to err on the side of overblocking than underblocking). In addition, filters reduced the non-productive demands on teachers and librarians, who would otherwise have to spend time watching what students and library patrons were doing. Note also that filters can be circumvented in many ways, the easiest being to obtain unfiltered Internet access in another venue (e.g., at home).
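The sketch below, with an invented blocked-term list and a tiny hand-labeled sample, illustrates why overblocking and underblocking are inherent in rule-based filtering rather than defects of any particular product.

```python
# Minimal sketch of a keyword filter and the two kinds of error it makes.
# The blocked-term list and the sample pages are invented for illustration.

BLOCKED_TERMS = {"sex", "xxx"}

def filter_blocks(page_text: str) -> bool:
    """Return True if the filter would block this page."""
    text = page_text.lower()
    return any(term in text for term in BLOCKED_TERMS)

# (page text, judged inappropriate by a human reviewer)
sample = [
    ("Middlesex County election results", False),           # overblocked: "sex" inside a place name
    ("Sex education resources for high schoolers", False),  # overblocked: health information
    ("XXX hardcore gallery, adults only", True),             # correctly blocked
    ("Hot amateur galleries, none of the listed terms", True),  # underblocked: no listed term appears
]

overblocked = sum(filter_blocks(text) and not bad for text, bad in sample)
underblocked = sum(not filter_blocks(text) and bad for text, bad in sample)
print(f"overblocked: {overblocked}, underblocked: {underblocked}")  # overblocked: 2, underblocked: 1
```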

Monitoring of a child's Internet use is another technology-based option. Many monitoring options are available (e.g., remote viewing of what is on a child's screen, logging of keystrokes, recording of Web pages that he or she has visited)--and each of these options can be used surreptitiously or openly. Surreptitious monitoring cannot deter deliberate access to inappropriate material or experiences, and raises many concerns about privacy (for example, in a family context, it raises the same questions as reading a child's diary or searching his or her room covertly). Furthermore, while it probably does provide a more accurate window into what a child is doing online compared to the lack of monitoring, it presents a conflict between taking action should inappropriate behavior be discovered and potentially revealing the fact of monitoring.
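As a simple illustration of one monitoring option named above, the sketch below (a hypothetical logging helper, not any vendor's tool) records visited Web pages with timestamps; whether that log is reviewed openly or covertly is a choice made by the adult, not a property of the mechanism.

```python
import datetime

# Minimal sketch of recording the Web pages a child visits.
# The log location is assumed; a real tool would protect the file from tampering.
LOG_PATH = "visited_pages.log"

def record_visit(url: str, log_path: str = LOG_PATH) -> None:
    """Append a timestamped record of a visited URL to the log file."""
    timestamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(f"{timestamp}\t{url}\n")

# A browsing proxy or browser extension would call this once per page load.
record_visit("https://example.org/homework/astronomy")
```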

The major advantage of monitoring over filtering is that it leaves the child in control of his or her Internet experiences, and thus provides opportunities for the child to learn how to make good decisions about Internet use. However, this outcome is likely only if the child is subsequently educated to understand the nature of the inappropriate use and is reinforced in the desirability of appropriate use. If, instead, the result of detecting inappropriate use is simply punishment, the result is likely to be behavior motivated by fear of punishment--with the consequence that when the monitoring is not present, inappropriate use may well resume. Clandestine monitoring may also have an impact on the basic trust that is a foundation of a healthy parent-child relationship.

Age verification technologies (AVTs) seek to differentiate between adults and children in an online environment. A common AVT is a request for a valid credit card number. Credit cards have some meaningful effectiveness in separating children from adults, but their effectiveness will decline as credit-card-like payment mechanisms for children become more popular. Other AVTs can provide higher assurance of adult status, but almost always at the cost of greater inconvenience to legitimate users.
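The sketch below illustrates why a credit card number is only a weak proxy for adult status: a standard checksum (the Luhn test used by payment cards) can confirm that a number is well formed, but nothing in the check establishes the age of the person who typed it. The code is illustrative only and does not describe any actual site's verification scheme.

```python
def luhn_valid(card_number: str) -> bool:
    """Return True if the digits pass the Luhn checksum used by payment cards."""
    digits = [int(c) for c in card_number if c.isdigit()]
    if len(digits) < 13:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit, counting from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def passes_age_screen(card_number: str) -> bool:
    # A well-formed number is taken as a proxy for "an adult is present,"
    # which is exactly the weakness: the check proves possession of a
    # plausible number, not the age of the person typing it.
    return luhn_valid(card_number)

print(passes_age_screen("4111 1111 1111 1111"))  # True for a well-known test number, yet proves nothing about age
```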

A number of other technology-based tools are discussed in the main report.


OVERALL CONCLUSIONS

Contrary to statements often made in the political debate, the issue of protecting children from inappropriate sexually explicit material and experiences on the Internet is very complex. Individuals have strong and passionate views on the subject, and these views are often mutually incompatible. Different societal institutions see the issue in very different ways and have different and conflicting priorities about the values to be preserved. Different communities--at the local, state, national, and international levels--have different perspectives. Furthermore, the technical nature of the Internet has not evolved in such a way as to make control over content easy to achieve.

There is no single or simple answer to controlling the access of minors to inappropriate material on the Web. To date, most of the efforts to protect children from inappropriate sexually explicit material on the Internet have focused on technology-based tools such as filters and legal prohibitions or regulation. But the committee believes that neither technology nor policy can provide a complete--or even a nearly complete--solution. While both technology and public policy have important roles to play, social and educational strategies to develop in minors an ethic of responsible choice and the skills to effectuate these choices and to cope with exposure are foundational to protecting children from negative effects that may result from exposure to inappropriate material or experiences on the Internet.

Technology can pose barriers sufficient to keep those who are not strongly motivated from finding their way to inappropriate material or experiences. Further, it can help to prevent inadvertent exposure to such material. But, as most parents and teachers noted in their comments to the committee, those who really want access to inappropriate sexually explicit material will find a way to get it. It follows that the real challenge is to reduce the number of children who are strongly motivated to obtain inappropriate sexually explicit material. This, of course, is the role of social and educational strategies.

As for public policy, the international dimension of the Internet poses substantial difficulties and makes a primary reliance on regulatory approaches unwise. Absent a strong international consensus on appropriate measures, it is hard to imagine what could be done to persuade foreign sources to behave in a similar manner or to deny irresponsible foreign sources access to U.S. Internet users.

This is not to say that technology and policy cannot be helpful. Technology-based tools, such as filters, provide parents and other responsible adults with additional choices as to how best to fulfill their responsibilities. Law and regulation can help to shape the environment in which these strategies and tools are used by reducing at least to some extent the availability of inappropriate sexually explicit material on the Internet, for example, by creating incentives and disincentives for responsible business behavior. Moreover, developments in technology can help to inform and support policy choices, and public policy decisions necessarily affect both technology and the nature and shape of parental guidance. In concert with appropriate social and educational strategies, both technology and public policy can contribute to a solution if they are appropriately adapted to the many circumstances that will exist in different communities. In the end, however, values are closely tied to the definitions of responsible choice that parents or other responsible adults wish to impart to their children, and to judgments about the proper mix of education, technology, and policy to adopt.

Though some might wish otherwise, no single approach--technical, legal, economic, or educational--will be sufficient. Rather, an effective framework for protecting our children from inappropriate materials and experiences on the Internet will require a balanced composite of all of these elements, and real progress will require forward movement on all of these fronts.


Notes

1 For purposes of this report, "material" refers to that which may be seen or read (e.g., images, movies, or text on a Web page), while "experiences" are interactive (e.g., talking to a stranger through instant messages or chat rooms). E-mail sent or received that is essentially advertising is "material," while a sequence of interactive e-mails corresponds to "experiences."










