Thursday, February 12, 2015


Suggestions from all of you are cordially invited for building this blog. Please send your suggestions and entries. Its scope is the worldwide knowledge community, and it will make an important contribution to the career development of all aspirants. You may send your suggestions to this e-mail address - chandrashekhar.malav@yahoo.com

12. Advanced Course in Information Storage and Retrieval - I


P- 06. Information Storage and Retrieval

By: Dr P. M. Devika, Paper Coordinator

Content
  1. Introduction
  2. Information Retrieval Standards and Protocols
  3. Global Digital Library
  Intelligent Information Retrieval
  Intelligent hypertext and hypermedia systems
  Hypermedia development tools
  User interface


1. Introduction

The advancement of Information and Communication Technology (ICT) has changed the way information is searched and retrieved, bringing a revolution in IR. Many advancements have taken place over time.

The goal of an Information Retrieval (IR) system, as we know, is to respond to a user's request by retrieving documents whose contents match the user's information need.

An Information Retrieval (IR) system obtains information resources relevant to a user's query from a collection of information resources. The need for an IR system arises when traditional cataloguing systems can no longer cope. With the rapid growth of digitised unstructured information, search became the only viable way of giving access to this vast amount of information, and IR systems have become ubiquitous.

The standard practice is that users express their information requirements in natural language, as a statement or as part of a natural language dialogue. After retrieval, the user examines the retrieved documents by going through the text and determines whether they are relevant. However, as we know from experience, the retrieved documents often do not match the user's information need. This is because of the ambiguous nature of natural languages (discussed in detail in the succeeding sections).

An information retrieval (IR) system locates information that is relevant to a user’s query. An IR system
typically searches in collections of unstructured or semi-structured data (e.g. web pages, documents, images,
video, etc.). The need for an IR system occurs when a collection reaches a size where traditional cataloguing techniques can no longer cope. Similar to Moore’s law of continual processor speed increase, there has been a
consistent doubling in digital storage capacity every two years. The number of bits of information packed into a
square inch of hard drive surface grew from 2,000 bits in 1956 to 100 billion bits in 2005 [1]. With the growth of
digitised unstructured information and, via high speed networks, rapid global access to enormous quantities of
that information, the only viable solution to finding relevant items from these large text databases was search,
and IR systems became ubiquitous. 
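The idea of search over large text collections can be illustrated with a minimal inverted index. This is a generic sketch, not a description of any particular IR product; the three-document collection and the AND-only query semantics are illustrative assumptions.

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query term (AND semantics)."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

docs = {
    1: "information retrieval systems search large text databases",
    2: "digital storage capacity doubles every two years",
    3: "search engines retrieve relevant information",
}
index = build_index(docs)
print(search(index, "information search"))  # ids of docs containing both terms
```

Real IR systems add ranking, stemming, and phrase handling on top of this basic term-matching core, which is precisely where the ambiguity of natural language queries discussed above becomes a problem.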

2. Information Retrieval Standards and Protocols

2.1 What is a Standard?

A standard represents an agreement on how to perform some activity so as to obtain predictable results. All standards published by the National Information Standards Organization (NISO) are established by an agreement that draws on the expertise of implementers, vendors, product designers, and users of those products; they are approved by the American National Standards Institute (ANSI). Various standards and protocols exist today. The following sections discuss some of the most popular query and retrieval standards and protocols, such as Z39.50, CQL, SRW and SRU.

2.2 Z39.50

Z39.50 is used at both the national and international level (ISO 23950); it is a standard that defines a protocol for computer-to-computer information retrieval. With the help of Z39.50, a user in one system can search and retrieve information from other computer systems (that have also implemented Z39.50) without knowing the search syntax used by those other systems. Z39.50 was originally approved by the National Information Standards Organization (NISO) in 1988.

Software and system vendors provide access to information from a diversity of unique systems with diverse hardware, software, interfaces, and database search commands. Compounding matters for the information seeker, the Internet offers access to an overwhelming array of databases that grows at an exponential rate daily. The challenge for users is to identify the right information painlessly in the middle of this vast array. The goal of Z39.50 is to reduce the complexity and difficulty of searching and retrieving information. It makes the life of end-users easier, letting them use the wealth of information resources on the Internet.

When one uses Z39.50-enabled systems, a user in one system can explore the electronic information in another system without having to know how that system functions. Z39.50 operates in a client/server environment, acting as a common language that all Z39.50-enabled systems can understand. It is an Esperanto-like language that bridges the many “languages and dialects” that different information systems “speak.” For Z39.50 communication and interoperation to take place, both the client and the server must be able to speak the Z39.50 language. Almost all Z39.50 implementations use the standard TCP/IP Internet communications protocol to connect the systems, and Z39.50-compliant software to translate between them for search and retrieval.

From the users’ point of view, all the technical activities occur behind the scenes; they simply see their familiar search and display interface. To achieve this interoperability, Z39.50 standardizes the messages that clients and servers use for communication, regardless of what underlying software, systems, or platform are used. Z39.50 supports open systems, which means it is nonproprietary, or vendor independent. A client system that implements the Z39.50 protocol allows communication with diverse servers, and a server system that implements the protocol is searchable by clients developed by different vendors. Without having to know how the server works, the user performs a search through the Z39.50 interface on the client. Z39.50 governs how the client translates the search into a standard format for sending to the server. After receiving the search, the server uses Z39.50 rules to translate the search into a format recognized by the local database, performs the search, and returns the results to the user’s client. The client’s user interface software processes the results returned via Z39.50 with the goal of displaying them as closely as possible to the way records are displayed in the user’s local system. (Z39.50 A Primer on the Protocol)
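To make the idea of a standardized query format concrete, the sketch below maps user-facing field names onto "use" attributes from Bib-1, a standard Z39.50 attribute set, and renders the query in Prefix Query Format (PQF), a common textual notation for Z39.50 queries (used, for example, by the YAZ toolkit). The three attribute values shown are from the published Bib-1 registry; the helper function and field names are illustrative assumptions, and a real client would go on to encode this into the protocol's binary messages.

```python
# Bib-1 "use" attribute values (from the Z39.50 Bib-1 registry):
BIB1_USE = {"title": 4, "author": 1003, "subject": 21}

def to_pqf(field, term):
    """Render a single-field query in Prefix Query Format (PQF).
    '@attr 1=N' selects the Bib-1 use attribute N for the term."""
    if field not in BIB1_USE:
        raise ValueError(f"no Bib-1 use attribute mapped for {field!r}")
    return f'@attr 1={BIB1_USE[field]} "{term}"'

print(to_pqf("title", "digital libraries"))   # @attr 1=4 "digital libraries"
print(to_pqf("author", "Chen"))               # @attr 1=1003 "Chen"
```

Because every compliant client and server agrees on the meaning of these attributes, the same query works against any Z39.50 target regardless of its local search syntax.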

2.3 CQL

CQL (the Common Query Language, http://www.loc.gov/cql/) is an abstract and extensible query language designed to provide maximum interoperability among systems while being easy to learn and use, yet retaining the functionality to permit complex searches. CQL was designed for use with SRW, a search protocol successor to Z39.50. Its main market at present is the bibliographic domain; nevertheless it is not restricted to this context alone. It provides a standards-based and tested mechanism to specify a query that may be used either internally or remotely to select records from a database. The Library of Congress bibliographic database has an SRW/CQL interface available today for all 28 million records.

By using CQL in OpenOffice, a huge amount of data can be identified on request and retrieved for integration within the application. In the first instance this integration can easily be accomplished within the bibliographic subsystem; in the future it would also, for example, permit standards-based collaboration on documents by retrieving OpenOffice documents instead of bibliographic records from appropriate repositories. To retain consistency in the user experience, CQL should thus also be used internally for searching the bibliographic database provided as part of the application. The user is not required to use CQL directly, and the system need not treat local and remote queries differently.
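A few concrete queries may help. The sketch below lists some example CQL query strings and naively splits a flat query into (index, relation, term) clauses. This is an illustration only: the real CQL grammar supports nesting, prefix maps, and relation modifiers that this simple regular expression ignores, and the example index names are assumptions in the Dublin Core style.

```python
import re

# Example CQL queries: an index, a relation, and a search term.
examples = [
    'dc.title any "information retrieval"',    # match any of the words
    'dc.creator = "Chen" and dc.date > 1994',  # boolean combination
    'subject exact "digital libraries"',       # exact match
]

CLAUSE = re.compile(r'(\S+)\s+(any|all|exact|=|>|<)\s+("[^"]*"|\S+)')

def clauses(query):
    """Extract (index, relation, term) triples from a flat CQL query.
    Naive: ignores nesting, prefixes, and relation modifiers."""
    return [(i, rel, term.strip('"')) for i, rel, term in CLAUSE.findall(query)]

print(clauses(examples[1]))
```

The point of the abstraction is that a server can translate each clause into whatever its local database understands, which is what makes CQL usable both internally and remotely.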


2.4 SRW

SRW is the "Search/Retrieve Web Service" protocol. Its objective is to integrate access to several networked resources and to support interoperability between distributed databases by enabling a common utilization framework. SRW is a web-service-based protocol that brings together more than 20 years of experience from the implementers of the Z39.50 Information Retrieval protocol with current developments in web technology.

SRW features both SOAP- and URL-based access mechanisms to allow a wide variety of possible clients, ranging from Microsoft's .Net initiative to simple JavaScript and XSLT transformations. It leverages the CQL query language, which offers a powerful yet intuitive means to formulate searches. The protocol mandates the use of the open, industry-supported standards XML and XML Schema and, where appropriate, XPath and SOAP. SRW has been developed by an international team, aiming to minimize cross-language pitfalls and other potential internationalization problems.

The SRW Initiative, building on Z39.50 along with web technologies, recognizes the significance of Z39.50 (as currently defined and deployed) for business communication, and concentrates on getting information to the user. SRW provides semantics for searching databases containing metadata and objects, both text and non-text. Building on Z39.50 semantics enables the creation of gateways to existing Z39.50 systems while lowering the barriers for new information providers to make their resources available via a standard search and retrieve service. SRW defines a web service merging several Z39.50 features, most notably the Search, Present, and Sort Services.

2.5 SRU

SRU stands for Search/Retrieve via URL. It is a standard XML-based protocol for search queries, utilizing CQL (Contextual Query Language), a standard syntax for representing queries. The main difference between SRU and SRW is that SRU uses HTTP as the transport mechanism: the query itself is communicated as a URL, and XML is returned as if it were a web page (note that POST, an alternative way of using the HTTP transport mechanism, is not acceptable in SRU). SRW is SOAP based, meaning that both the query and the result are XML streams. The advantage of this is that a variety of transport mechanisms can be used, including, for instance, e-mail.
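The difference is easy to see in code. The sketch below builds an SRU searchRetrieve request, in which the whole query travels as HTTP GET parameters of a URL; the endpoint shown is hypothetical, while the parameter names follow the SRU 1.1 convention (operation, version, query, startRecord, maximumRecords). An SRW client would instead wrap the same information in a SOAP envelope and POST it.

```python
from urllib.parse import urlencode

def sru_url(base, cql_query, start=1, maximum=10):
    """Build an SRU searchRetrieve URL: the query is carried entirely
    in the GET parameters, and the server replies with XML."""
    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": cql_query,
        "startRecord": start,
        "maximumRecords": maximum,
    }
    return base + "?" + urlencode(params)

# Hypothetical endpoint; the Library of Congress exposes real SRU services.
url = sru_url("http://example.org/sru", 'dc.title any "digital library"')
print(url)
```

Because the request is just a URL, it can be typed into a browser or embedded in a link, which is exactly the simplicity SRU trades against SRW's transport flexibility.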

2.6 Conclusion

The goal in evolving and using technical standards in information services, reference, libraries, and publishing is to make information systems easier to use and less expensive to operate. The acceptance of these standards by experts who develop and sell products globally makes them widely accepted at an international level. Clients benefit from the implementation of standards as an assurance that products and services from varied sources meet a certain level of quality.


3.0 Global Digital Library

The extensive worldwide use of open systems like the Internet and World Wide Web (WWW) has made it possible for us to experience the reality of all types of "virtual libraries" in cyberspace. While recognizing the significance of several other related problems and issues with the information framework, which are more global in nature, it is the duty of librarians to actively participate and work together in minimizing these problems. The Global Digital Library (GDL) prototype has the potential to join several national libraries and some major libraries, archives, museums, and information organizations together. Undoubtedly, there is a call for global cooperation in "digital" knowledge building and sharing in this digital information age.


3.1 Paradigm Shift towards a Global Learned Society

With the advent of information and communication technology achievements there is an ever-increasing demand for access to global information, in order to have a much better and clearer picture of the world in which we live. We want to know more about our surroundings, our history, our cultures, our economy, our science and technology, and so on. As information has turned out to be the key to productivity, and with the advancement of technology, societal change and economic progress, libraries are facing new challenges. They must transcend the traditional method of providing information access within the confines of the library's physical structure, and instead provide access to services and global information resources to people at home, in school, at work, or any place they desire (Chen, 1994).

3.2 Reality and Challenges

In the last few years, the use of the Internet for uploading digital documents on the World Wide Web (WWW), for both commercial and noncommercial purposes, has grown explosively. Now that it takes only a few seconds for us to talk, write, confer with, or send textual, audio and visual information to anyone in any part of the world, the scene for information seeking, library usage, and information services provision and delivery has changed dramatically. The digital library has become a reality, and information needs to be shared over cyberspace.

3.3 Moving Toward Universal Information Access

As computing and telecommunications have expanded and converged, we have moved closer toward universal information access. Now that we can talk, write, confer with, or send textual, audio and visual information to anyone in any part of the world, the concept of the digital "Global Library" is both conceptually sound and technologically feasible. Building such a universal library on an open system, with access to global information resources that include the collections of the world's greatest libraries as well as other resources, is still a challenge.

3.4 Obstacles to Universal Access

There exist many problems and issues related to the information infrastructure, and many barriers and hurdles that the information society faces during global information exchange. They are:
  1. Several legal issues may arise, with respect to copyright, intellectual property, privacy and confidentiality, personal and business equity, and security;
  2. Cultural differences reflected during information communication;
  3. Existence of generational gaps;
  4. Complexity of the information infrastructure at global and national levels;
  5. Lack of an adequate and effective inventory of available information resources that constitutes knowledge of information;
  6. Difficulty in locating and retrieving quality and relevant information;
  7. Emergence of complex issues related to "undesirable" or "indecent" information; etc.
The outcome is that, in spite of these problems and unsolved issues, the technologies will soon be available to enable us to link all the global information together to form "The Global Library" for multimedia information delivery.

3.5 The Global Digital Library: Infrastructure, Knowledge base Contents, and Global Coalition

The modern role of these libraries has to go beyond being a storehouse. Each library has to be a dynamic and aggressive information provider of its country's enormously rich information resources, as well as an effective node of the global information network that can provide access to all needed global information. Each contributes effectively toward the eventual realization of "The Global Library", in which national and research libraries of the world are linked together as nodes of the worldwide information network. The GDL prototype system has confirmed that providing universal information access to a large number of the world's digital libraries is technologically feasible, although several hurdles still have to be crossed to achieve a truly interoperable and functional GDL. These barriers, already discussed earlier, include standards, intellectual property, copyright, language barriers, security, funding, etc. They cannot be overlooked, as they form the central issues for any international cooperation.

The Global Digital Library prototype suggests that infrastructure should be developed. It is also necessary to form a global coalition of major libraries and information organizations, so that appropriate, substantive and high-level collaboration on the development of the GDL is possible. With this coalition, libraries will be able to find ways collectively to address the following issues:

  1. How can information resources be made available in digital form so that they can be shared globally, crossing every barrier?
  2. Which funding agencies are actively participating in the promotion of the GDL?
  3. How can a global coalition effort help to maximize the limited resources?
  4. Which sections of the information resources can be made public without copyright restriction?
  5. How can information be shared if it is copyright protected?
  6. How will the user community be able to contribute to and make use of the digital resources?

Although a powerful information superhighway exists, we are far from having our valuable information resources available in digital form so that they can be linked together using the available technologies. The first requirement is the availability of resources in digital format in cyberspace. Digital libraries are only possible if information sources are available in digital format. If they are, they can be used, distributed, and transmitted easily to end-users over a global information network such as the Internet.

Furthermore, the main concern for end-users will be the authority and quality of the content in digital libraries. Taking the example of national libraries, the GDL prototype experiment has vividly exposed that although a considerable number of national libraries have made their homepages available to users, most of them lack knowledge-based content. Instead, they are mainly directional and informational in nature. While these are important, they are only "pointers", which have to be linked further to the information itself, so that people can begin to learn and engage further. Thus, there is a desperate need for quality content building within the digital library community.

3.6 Conclusion

In the existing networked scenario, the knowledge world is moving from a paper culture to an electronic one; a paperless society is emerging, and libraries will be deeply affected. In other words, printed information sources, such as books, journals, manuscripts, bibliographies and archival materials, will not be enough. Digital information sources become essential, so libraries have to come up with limitless digital bookshelves.

As we move further with technological advancements in this digital visual information age, the need for each country to develop its national and global information infrastructure and to digitize its information resources will increase sharply. In working toward this global information access, the principles outlined in the Alexandria Declaration of Principles will continue to serve as an effective checklist for successful development.

The Web has become a digital landmark, as important as the Internet itself. It can generate an innovative and more accessible sub-world, like a window-shopping or market-square experience. The Internet can make the digital library available not only to the local but also to the global community, and can enable everyone to interact and engage in sharing and creating new knowledge based on the information obtained. Being at these crossroads, in addition to speculating on libraries in the next millennium, one should make it a point to develop, in this exciting networked environment, a vision for our global library's future, and define its role in facing a new frontier. It is important for us to visualize that not only would all types of libraries in our country be connected to the super-network, but globally all libraries would be part of the network as well. Anticipating the growing demand to use the Internet and the Web for more substantial purposes (communicating, learning, experiencing), the GDL prototype was created to provide users with a window-shopping experience of the world's rich information resources. But thus far, going beyond the window dressing, there have been few deliverable "products" or "contents". We must have the real thing so that library Net and Web users do not end up frustrated! This is a real challenge!

References

  • Alexandria Declaration of Principles [prepared by Robert M. Hayes and Ching-chih Chen] (1995). Microcomputers for Information Management, 12 (1-2), 3-8. Also in Planning Global Information Infrastructure, ed. by Ching-chih Chen. Norwood, NJ: Ablex. pp. 1-6.
  • Bobinha, José Luis B. and Delgado, José Carlos M. (1996, November). Internet and the New Library. In Proceedings of NIT '96: The 9th International Conference on New Information Technology, November 11-14, 1996, Pretoria, South Africa. Ed. by Ching-chih Chen. Newton, MA: MicroUse Information. pp. 1-12.
  • Chen, Ching-chih. (1986, December). Libraries in the information age: Where are the microcomputer and laser optical disc technologies taking us? Microcomputers for Information Management, 3 (4), 253-266.
  • Chen, Ching-chih. (1990, December). The challenge to library and information professionals in the visual information age. Microcomputers for Information Management, 7 (4), 255-272.
  • Chen, Ching-chih. (1993, November). Thriving in the digital communications environment: PROJECT EMPEROR-I's experience, from print to multimedia, analog to digital. In Proceedings of NIT '93: The Sixth International Conference on New Information Technology. West Newton, MA: MicroUse Information. pp. 59-68.
  • Chen, Ching-chih. (1994, September). Information superhighway and the digital global library: Realities and challenges. Microcomputers for Information Management, 11 (3), 143-156.
  • Chen, Ching-chih, ed. (1995a). Planning Global Information Infrastructure. Norwood, NJ: Ablex. 518 p.
  • Chen, Ching-chih. (1995b). Digital global cultural and heritage information network: Personal viewpoint. In Planning Global Information Infrastructure, ed. by Ching-chih Chen. Norwood, NJ: Ablex. pp. 181-185.
  • Chen, Ching-chih. (1996). Global digital library initiative: Prototype development and needs. Microcomputers for Information Management, 13 (2), 133-164.
  • Lynch & Garcia-Molina. (1996). Interoperability, scaling, and the digital libraries research agenda. Microcomputers for Information Management, 13 (2), 85-132.
  • Negroponte, Nicholas. (1996, May). Caught browsing again. Wired, p. 200.
  • Ziegler, Bart. (1994, May 18). Building the highway: New obstacles, new solutions. The Wall Street Journal, pp. B1, B3.

Intelligent Information Retrieval

Sparck Jones defines intelligent information retrieval as a computer system with the capability to infer, using its previous knowledge, a connection between its user’s requirement and a set of candidate documents. It is a system that can carry out intelligent retrieval. The realization that knowledge could be used within the retrieval system has led researchers to look at artificial intelligence systems, which also aim to incorporate and use knowledge; one particular class of these is the expert system.

Expert System

As briefly defined by Wikipedia, in artificial intelligence an expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning about knowledge, represented primarily as IF-THEN rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of AI software.

An expert system is divided into two sub-systems: the inference engine and the knowledge base. The knowledge base represents facts about the world and rules; the inference engine applies the rules to the known facts to deduce new facts, and can also include explanation and debugging capabilities. An expert system is an example of a knowledge-based system, and expert systems were the first commercial systems to use a knowledge-based architecture.

In early expert systems such as Mycin and Dendral, facts were represented primarily as flat assertions about variables. In later expert systems developed with commercial shells, the knowledge base took on more structure and utilized concepts from object-oriented programming: the world was represented as classes, subclasses, and instances, and assertions were replaced by values of object instances. The rules worked by querying and asserting values of the objects.
The inference engine is an automated reasoning system that evaluates the current state of the knowledge-base, applies relevant rules, and then asserts new knowledge into the knowledge base. The inference engine may also include capabilities for explanation, so that it can explain to a user the chain of reasoning used to arrive at a particular conclusion by tracing back over the firing of rules that resulted in the assertion.
As Expert Systems evolved many new techniques were incorporated into various types of inference engines. Some of the most important of these were:
  • Truth Maintenance: Truth maintenance systems record the dependencies in a knowledge-base so that when facts are altered dependent knowledge can be altered accordingly. For example, if the system learns that Socrates is no longer known to be a man it will revoke the assertion that Socrates is mortal.
  • Hypothetical Reasoning: In hypothetical reasoning, the knowledge base can be divided up into many possible views, aka worlds. This allows the inference engine to explore multiple possibilities in parallel. In this simple example, the system may want to explore the consequences of both assertions, what will be true if Socrates is a Man and what will be true if he is not?
  • Fuzzy Logic: One of the first extensions of simply using rules to represent knowledge was also to associate a probability with each rule. So, not to assert that Socrates is mortal but to assert Socrates may be mortal with some probability value. Simple probabilities were extended in some systems with sophisticated mechanisms for uncertain reasoning and combination of probabilities.
  • Ontology Classification: With the addition of object classes to the knowledge base, a new type of reasoning was possible. Rather than reason simply about the values of the objects, the system could also reason about the structure of the objects. In this simple example, Man can represent an object class and R1 can be redefined as a rule that defines the class of all men. These types of special-purpose inference engines are known as classifiers. Although they were not widely used in expert systems, classifiers are very powerful for unstructured volatile domains and are a key technology for the Internet and the emerging Semantic Web.
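The rule-firing behaviour described above can be sketched as a minimal forward-chaining inference engine. This is an illustrative toy, not a real expert system shell; the facts and rules encode the Socrates example used in the list above.

```python
def forward_chain(facts, rules):
    """Repeatedly fire IF-THEN rules whose premises are all satisfied,
    asserting each conclusion as a new fact, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)   # the rule "fires"
                changed = True
    return facts

# IF-THEN rules as (set of premises, conclusion) pairs.
rules = [
    ({"Socrates is a man"}, "Socrates is mortal"),
    ({"Socrates is a man", "Socrates is mortal"}, "Socrates will die"),
]
print(forward_chain({"Socrates is a man"}, rules))
```

A truth-maintenance system, as described above, would additionally record that the derived facts depend on "Socrates is a man", so that retracting that assertion would retract the conclusions as well.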

Expert System for Library Professionals


ES in Cataloguing
According to Davies (1986), cataloguing is a possible domain of application for ES because it has certain characteristics: there are recognized experts, the experts are demonstrably better than amateurs, the task takes an expert a few minutes to a few hours, the task is primarily cognitive, and the skill is routinely taught to newcomers. The 1980s saw a huge increase in activity, along with the popularity of developing ES and knowledge-based systems, in the sub-domain of cataloguing. Three streams of researchers emerged: those interested in developing systems to give advice on the application of rules (advisory programs), those concerned with record creation, and those more absorbed with automating the whole process (Morris, 1992).

ES for Automated Cataloguing

Interest in this area started with Ann M. Sandberg-Fox, who in 1972 conducted a pioneering study as her doctoral research at the University of Illinois at Urbana-Champaign. The study addressed the conceptual issues in determining whether the human intellectual process of selecting the main entry could be simulated by computers. It was only a decade later, in the late 1980s, that interest in this area picked up again. One research project in Germany produced a system called AUTOCAT (Endres-Niggemeyer and Knorz, 1987), which attempted to generate bibliographic records of periodical literature in the physical sciences that were available in machine-readable form. Another significant work was undertaken by Weibel, Oskins and Vizine-Goetz (1989). They built a prototype rule-based ES at OCLC, known as “the OCLC Automated Title Page Cataloguing Project”, to automate descriptive cataloguing from title pages. The system used OCR techniques, and their study reports a success rate of 75% in identifying and interpreting bibliographic data on title pages using visual and linguistic characteristics codified in only 16 rules.
Elaine Svenonius, like Weibel, was concerned with the interpretation of machine-readable title pages of English language monographs. Her research, however, focused on the problem of automatically deriving name access points, particularly personal names and corporate names. In their study, Molto and Svenonius (1991) propose an algorithm for identifying corporate names by creating a machine-readable corporate name authority file and matching character string sequences on the title pages with those in the authority file. In formulating an algorithm for identifying personal names, they effectively use initial element cues (i.e., first name, initials, and titles) and post-name markers (such as punctuation or spacing). The results of their study show high success rates of more than 84% in identifying both kinds of names.
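In the spirit of the Molto and Svenonius approach, the sketch below matches title-page text against a tiny stand-in authority file for corporate names, and uses pre-name title cues for personal names. The authority entries, the cue list, and the regular expression are all illustrative assumptions, far simpler than the published algorithm.

```python
import re

# Tiny stand-in for a machine-readable corporate name authority file.
AUTHORITY = {"library of congress", "university of illinois"}

# Pre-name title cues followed by capitalized name words.
TITLE_CUES = re.compile(r'\b(?:Dr|Prof|Mr|Ms|Mrs)\.\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*')

def find_names(title_page_text):
    """Corporate names: match character strings against the authority file.
    Personal names: detect title cues preceding capitalized words."""
    text_lower = title_page_text.lower()
    corporate = [name for name in AUTHORITY if name in text_lower]
    personal = TITLE_CUES.findall(title_page_text)
    return corporate, personal

corp, pers = find_names("Edited by Dr. Elaine Svenonius for the Library of Congress")
print(corp, pers)
```

Even this toy shows why the published results are impressive: real title pages vary in word order, abbreviation, and punctuation, so robust cue- and authority-based matching takes far more than a single pattern.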
The QUALCAT (Quality Control in Cataloguing) project at the University of Bradford attempted to apply automated quality control to databases of bibliographic records. Sets of putative duplicates, records that appeared to describe the same monograph, were grouped together, and an ES was used to determine whether they were in fact duplicates and, if so, which was the best record (Ridley, 1992; Ayres, 1994).
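A QUALCAT-style duplicate check can be sketched as grouping records by a normalised key and keeping the most complete record of each group. The field names, records, and the "most non-empty fields wins" rule below are invented simplifications, not QUALCAT's actual rules:

```python
# Sketch: group putative duplicate bibliographic records by a normalised
# title/author key, then keep the "best" (here: most complete) record.
# Records and the completeness heuristic are invented for illustration.

from collections import defaultdict

def key(record):
    """Normalised grouping key for putative duplicates."""
    return (record["title"].lower().strip(), record["author"].lower().strip())

def best_record(records):
    """'Best' here simply means the record with the most non-empty fields."""
    return max(records, key=lambda r: sum(1 for v in r.values() if v))

def deduplicate(records):
    groups = defaultdict(list)
    for r in records:
        groups[key(r)].append(r)
    return [best_record(g) for g in groups.values()]

recs = [
    {"title": "Hamlet", "author": "Shakespeare", "publisher": ""},
    {"title": "Hamlet ", "author": "shakespeare", "publisher": "Penguin"},
]
print(len(deduplicate(recs)))  # 1
```

The expert-system part of QUALCAT lay in the rules deciding whether grouped records really were duplicates; the sketch only shows the grouping and selection skeleton around those rules.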

 

ES in Classification

Classification is a difficult function to capture in an ES. While there are guides to determining classification numbers and subject headings, there are no strict rules available, and the relationships between objects and classes are often ambiguous. Among the systems that have been developed for item, patent and book classification are those by Sharif (1988); Valkonen and Nykanen (1991); Cosgrove and Weimann (1992); and Gopinath and Prasad (1994).

In 1986, Paul Burton conducted exploratory research at the University of Strathclyde in the United Kingdom, aiming to assess the merits of various ways of knowledge representation and the suitability of ES for classification. The research resulted in a prototype ES that was able to advise a Dewey classification number based on the information provided by the user, to justify its reasoning, and to explain why it asked certain questions. Following the research, OCLC developed Cataloguer's Assistant and tested it at Carnegie Mellon University to reclassify the mathematics and computer science collection. The experiment looked closely at research questions such as knowledge representation, navigation tools, search capabilities and the various ways of displaying data.

ES in Document Delivery
There were only two references found pertaining to document delivery. The first reports a system developed by Brown (1993b) and the other by Abate (1995). Brown described the use of ES technology at Raytheon Company's equipment division to co-ordinate requests for specifications and standards documents with purchases made through the acquisitions unit. She further discussed the development of a knowledge base using the shell VP-Expert. Abate reported on an ES developed for document delivery decision-making in the library of a law firm, also using the ES shell VP-Expert.

ES in Abstracting
Most of the research in abstracting has been concerned with abstracting papers from learned journals and conference proceedings. The first reported experiment on automatic abstracting was in 1958 by Luhn. Since then, other systems have been developed by DeJong (1983), Lebowitz (1986), Husk (1988), and many more. DeJong (1982) produced the FRUMP system that analyses newspaper articles using frame-based techniques. The articles are scanned and data are automatically fed into various slots within frames. Scripts are then used to generate summaries of the information held in the relevant frames. Another system, which reports on corporate mergers and acquisitions, was developed by Rau, Jacobs and Zernik (1989). Known as SCISOR, this system produced a detailed linguistic analysis of a text from which a semantic graph is constructed.
Very few expert systems are reported to be operating in LIS, such as AquaRef, Reference Expert and ARDIS. A few systems have progressed to commercial availability but later failed and were withdrawn from the market, like Tome Selector and Tome Searcher.

In terms of limitations, the following points should be noted: 
1. Initially, due to a smaller object database, the results will be less efficient (though still more efficient than with current technology). This problem can be overcome by building a large database before the start of the service.
2. The number of form fields to be filled in by the user may increase if precise results are desired.
3. The cost of implementation will be very high.

Despite these limitations, this Intelligent Information Retriever is a major enhancement over the current search engines and is a serious step forward in the direction of incorporating Artificial Intelligence in searching for more efficient results.


References

Davies, Roy. 1986. Cataloguing as a domain for an expert system. In: Davies, R., (ed.). Intelligent Information Systems: Progress and Prospects. Chichester, England : Ellis Horwood, pp. 54 - 77.
Morris, Anne. (ed.) 1992. The Application of Expert Systems in Libraries and Information Centres. London : Bowker-Saur.
Rau, L. F.; P.S. Jacobs and U. Zernik. 1989. SCISOR: information extraction and text summarization using linguistic knowledge acquisition. Information Processing and Management, Vol.25 no.4: 419-428.
Richardson, John. 1989. Toward an expert system for reference service: a research agenda for the 1990s. College and Research Libraries, Vol.50 no.2: 230-248.
Endres-Niggemeyer, B. and G. Knorz. 1987. AUTOCAT: knowledge-based descriptive cataloguing of articles published in scientific journals. Second International GI Congress 1987. Knowledge Based Systems. Munich, October 20-21, 1987.
Weibel, Stuart; M. Oskins and Diane Vizine-Goetz. 1989. Automatic title page cataloguing: a feasibility study. Information Processing and Management, Vol.25 no.2: 187-203.
Molto, Mavis and Elaine Svenonius. 1991. Automatic recognition of title page names. Information Processing and Management, Vol.27: 83-95.
Ridley, M. J. 1992. An expert system for quality control and duplicate detection in bibliographic databases. Program, Vol.26 no.1: 1-18.
Sharif, Carolyn A. Y. 1988. Developing an expert system for classification of books using micro-based expert systems shells. British Library Research Paper 32.
Valkonen, Pekka and Olli Nykanen. 1991. An expert system for patent classification. World Patent Information, Vol.13 no.3: 143-148.
Gopinath, M. A. and A.R.D. Prasad. 1994. A knowledge representation model for Analytico-Synthetic classification. In: Knowledge Organization and Quality Management. Proceedings of the 3rd ISKO Conference; 20-24 June 1994; Copenhagen, Denmark. pp. 320-327.
Cosgrove, S. J. and J. M. Weimann. 1992. Expert system technology applied to item classification. Library Hi Tech, Vol.10: 34.
Brown, Lynne C. Branche. 1993a. An expert system for predicting approval plan receipts. Library Acquisitions: Practice & Theory, Vol.17: 156-162.
Abate, A. K. 1995. Document delivery expert. Journal of Interlibrary Loan, Document Delivery and Information Supply, Vol.6, no.1: 17 - 37.
DeJong, Gerald. 1983. Artificial intelligence implications for information retrieval. In: Proceedings of the Association for Computing Machinery, Special Interest Group on Information Retrieval (ACM SIGIR) 6th Annual International Conference, 6-8 June 1983, Bethesda, MD, pp. 10-17.
Lebowitz, M. 1986. An experiment in intelligent information systems - RESEARCHER. In: Davies, R., (ed.). Intelligent Information Systems: Progress and Prospects. Chichester, England : Ellis Horwood. pp. 127-149.
Husk, G. D. 1988. Techniques for automatic abstraction of technical documents using reference resolution and self-inducing phrases. Master’s dissertation. University of Lancaster.

Intelligent hypertext and hypermedia systems

Hypertext is text displayed on a computer display or other electronic device with references (hyperlinks) to other text which the reader can immediately access, or where text can be revealed progressively at multiple levels of detail (also called StretchText). The hypertext pages are interconnected by hyperlinks, typically activated by a mouse click, keypress sequence or by touching the screen. Apart from text, hypertext is sometimes used to describe tables, images and other presentational content forms with hyperlinks. Hypertext is the underlying concept defining the structure of the World Wide Web, with pages often written in the Hypertext Markup Language (HTML). It enables an easy-to-use and flexible connection and sharing of information over the Internet.

Hypertext documents can either be static (prepared and stored in advance) or dynamic (continually changing in response to user input, such as dynamic web pages). Static hypertext can be used to cross-reference collections of data in documents, software applications, or books on CDs. A well-constructed system can also incorporate other user-interface conventions, such as menus and command lines. Links used in a hypertext document usually replace the current piece of hypertext with the destination document. A lesser-known feature is StretchText, which expands or contracts the content in place, giving the reader more control over the level of detail of the displayed document. Hypertext can develop very complex and dynamic systems of linking and cross-referencing. The most famous implementation of hypertext is the World Wide Web, first deployed in 1992.

Components of Hypertext
Hypertext retrieval systems are the products of an emerging technology that offers an alternative approach to the retrieval of information from machine-readable full-text documents. They provide the facility for any relationship existing between two document representations, or nodes, to be represented by a link. Searchers are able to retrieve individual nodes successively by activating the links between them. Such links may be created by manual or automatic means.
A document retrieval system may be identified as a hypertext system if its components include the following:

- A structural component, consisting of a database of document representations in which the relationships between documents are explicitly represented, such that the document representations and the relationships between them together form a structure.
- A functional component, consisting of a retrieval mechanism that is:
  - navigational, i.e. it allows users to make decisions at each stage of the retrieval process as to which objects should be retrieved next; and
  - browsing-based, i.e. it allows users to search for information without having to specify a definite target.
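The two components can be sketched as a small data structure (nodes plus explicit links) with a navigational browse loop on top. The node names and the choice function below are invented for illustration:

```python
# Sketch of a hypertext system's two components: a structural component
# (nodes with explicit links) and a navigational retrieval mechanism that
# follows one link at a time. All node content here is invented.

class Hypertext:
    def __init__(self):
        self.nodes = {}   # node id -> text content
        self.links = {}   # node id -> list of linked node ids

    def add_node(self, node_id, text):
        self.nodes[node_id] = text
        self.links.setdefault(node_id, [])

    def add_link(self, src, dst):
        self.links[src].append(dst)

    def browse(self, start, choose):
        """Navigate from `start`; `choose` picks the next node id (or None to stop)."""
        current = start
        path = [current]
        while True:
            nxt = choose(self.links[current])
            if nxt is None:
                return path
            current = nxt
            path.append(current)

ht = Hypertext()
ht.add_node("intro", "Introduction to IR")
ht.add_node("eval", "Recall and precision")
ht.add_link("intro", "eval")
# A trivial "always take the first link" browsing strategy:
print(ht.browse("intro", lambda outgoing: outgoing[0] if outgoing else None))
# ['intro', 'eval']
```

In a real system `choose` would be the user clicking a link; the point of the sketch is that retrieval proceeds by successive link activation rather than by a query against the whole database.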
Main pillars of hypertext research

The maturation of software and hardware and the widespread availability of personal and mainframe computers have stimulated great interest in the design of electronic information systems and made possible search strategies impractical in manual systems. Research related to human performance with hypertext and other electronic information systems integrates methods and ideas from information retrieval, interface design, and cognitive science.

Information retrieval: Research related to on-line searching has focused on systems that aid professional intermediaries in finding a small number of "hits" in a large collection of records (library card catalogs, scientific journal abstracts, UPI reports, etc.). The emphasis has been on designing systems that aid or replace professional intermediaries (see Marcus for an example of an actual system). Professional on-line searchers primarily clarify information requests and retrieve relevant information for end users. They carefully plan in advance, consult thesauri, and combine terms in systematic and precise steps by applying logical connectives (AND, OR, NOT) and by adjusting proximity limits (the range of words within which query terms must co-occur) and scoping limits (the range of documents over which the search takes place). Unless they are themselves part of a research team effort, they act as communication channels, locating and transferring information to end users who interpret and apply it.

The primary goal of on-line searchers is to retrieve and communicate information efficiently; their analysis usually focuses on the facets of a request for information, not on the problem that motivated the question or the possible application of the answer. Search intermediaries rarely browse informally, because focusing on the goal yields efficient and cost-effective performance. Their analytical strategies include much preplanning, application of Boolean connectives, and systematic iterations of the querying and refinement process. End users, on the other hand, often browse despite the accruing costs, because they have long-term commitments to an area of research and may later benefit from extraneous information in that area. In other words, end users rationalize inefficient information-seeking strategies by hoping that incidental learning will have a beneficial cumulative effect. Browsing is an exploratory, information-seeking strategy that depends on serendipity. It is especially appropriate for ill-defined problems and for exploring new task domains.

Today's electronic retrieval systems were designed for use by professional intermediaries, or to emulate their performance. These systems focus on coding, indexing, and cross-referencing (organization for retrieval) rather than on meaning, readability, and assimilation (organization for understanding). Systems meant for end users must take these differences into account and support appropriate information-seeking strategies. Hypertext systems differ from existing on-line retrieval systems in that they encourage informal, personalized, content-oriented information-seeking strategies. Hypertext system users can actually apply information during the retrieval process by noting context, and during browsing by saving, linking, or transferring text or images. Of particular interest in our research is the support of end users through flexible and powerful human-computer interfaces that balance end-user browsing strategies with efficient analytical strategies like those used by professional intermediaries.
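The analytical Boolean strategy used by professional intermediaries can be sketched over a tiny invented inverted index (the terms, postings sets, and document ids below are made up):

```python
# Sketch: Boolean connectives (AND, OR, NOT) applied to postings sets from
# a small invented inverted index, as a professional searcher's query
# formulation would combine them.

INDEX = {
    "hypertext": {1, 2, 3},
    "retrieval": {2, 3, 4},
    "multimedia": {3, 5},
}
ALL_DOCS = {1, 2, 3, 4, 5}

def AND(a, b): return a & b          # both terms must occur
def OR(a, b): return a | b           # either term may occur
def NOT(a): return ALL_DOCS - a      # exclude documents containing the term

# Query: hypertext AND retrieval NOT multimedia
result = AND(INDEX["hypertext"], INDEX["retrieval"]) & NOT(INDEX["multimedia"])
print(sorted(result))  # [2]
```

Proximity and scoping limits, mentioned above, would further restrict these sets by word position and document range; the sketch shows only the set-algebra core of the strategy.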
Hypermedia is used as a logical extension of the term hypertext in which graphics, audio, video, plain text and hyperlinks intertwine to create a generally non-linear medium of information. This contrasts with the broader term multimedia, which may be used to describe non-interactive linear presentations as well as hypermedia. It is also related to the field of electronic literature. The term was first used in a 1965 article by Ted Nelson.

The World Wide Web is a classic example of hypermedia, whereas a non-interactive cinema presentation is an example of standard multimedia due to the absence of hyperlinks.

The first hypermedia work was, arguably, the Aspen Movie Map. Atkinson's HyperCard popularized hypermedia writing, while a variety of literary hypertext and hypertext works, fiction and nonfiction, demonstrated the promise of links. Most modern hypermedia is delivered via electronic pages from a variety of systems including media players, web browsers, and stand-alone applications (i.e., software that does not require network access). Audio hypermedia is emerging with voice command devices and voice browsing. 

Hypermedia development tools

Hypermedia may be developed in a number of ways. Any programming tool can be used to write programs that link data from internal variables and nodes to external data files. Multimedia development software such as Adobe Flash, Adobe Director, Macromedia Authorware, and MatchWare Mediator may be used to create stand-alone hypermedia applications, with emphasis on entertainment content. Some database software such as Visual FoxPro and FileMaker Developer may be used to develop stand-alone hypermedia applications, with emphasis on educational and business content management.
Hypermedia applications may be developed on embedded devices for the mobile and the digital signage industries using the Scalable Vector Graphics (SVG) specification from W3C (World Wide Web Consortium). Software applications such as Ikivo Animator and Inkscape simplify the development of hypermedia content based on SVG. Embedded devices such as iPhone natively support SVG specifications and may be used to create mobile and distributed hypermedia applications.

Hyperlinks may also be added to data files using most business software via the limited scripting and hyperlinking features built in. Documentation software such as the Microsoft Office Suite and LibreOffice allow for hypertext links to other content within the same file, other external files, and URL links to files on external file servers. For more emphasis on graphics and page layout, hyperlinks may be added using most modern desktop publishing tools. This includes presentation programs, such as Microsoft Powerpoint and LibreOffice Impress, add-ons to print layout programs such as Quark Immedia, and tools to include hyperlinks in PDF documents such as Adobe InDesign for creating and Adobe Acrobat for editing. Hyper Publish is a tool specifically designed and optimized for hypermedia and hypertext management. Any HTML editor may be used to build HTML files, accessible by any web browser. CD/DVD authoring tools such as DVD Studio Pro may be used to hyperlink the content of DVDs for DVD players or web links when the disc is played on a personal computer connected to the internet.



Hypermedia Linking
Hypermedia systems - indeed information in general - contains various types of relationships between elements of information. Hypermedia allows these relationships to be instantiated as links which connect the various information elements, so that these links can be used to navigate within the information space. We can develop different taxonomies of links, in order to discuss and analyse how they are best utilised. One possible taxonomy is based on the mechanics of the links. We can look at the number of sources and destinations for links (single-source single-destination, multiple-source single-destination, etc.) the directionality of links (unidirectional, bidirectional), and the anchoring mechanism (generic links, dynamic links, etc.). A more useful link taxonomy is based on the type of information relationships being represented. In particular we can divide relationships (and hence links) into those based on the organisation of the information space (structural links) and those related to the content of the information space (associative and referential links). Let us take a brief look at these links in more detail.

Structural Links: The information contained within the hypermedia application is typically organised in some suitable fashion. This organisation is typically represented using structural links. We can group structural links together to create different types of application structures. If we look, for example, at a typical book, then this has both a linear structure (from the beginning of the book linearly to the end) and usually a hierarchical structure (the book contains chapters, the chapters contain sections, the sections contain sub-sections, and so on). Typically in a hypermedia application we try to create and utilise appropriate structures. Example structures are discussed in more detail in Section 3.2. These structures are important in that they provide a form for the information space, and hence allow the user to develop an understanding of the scale of the information space and their location within it. This is very important in helping the user navigate within the information space. Structural relationships do not, however, imply any semantic relationship between linked information. For example, a chapter in a book which follows another is structurally related to it, but may not contain any directly related information. This is the role of associative links.

Associative Links: An associative link is an instantiation of a semantic relationship between information elements. In other words, completely independently of the specific structure of the information, we have links based on the meaning of different information components. The most common example which most people would be familiar with is cross-referencing within books ("for more information on X refer to Y"). It is these relationships - or rather the links which are a representation of the relationships - which provide the essence of hypermedia, and in many respects can be considered to be the defining characteristic.

Referential Links: A third type of link which is often defined (and is related to associative links) is a referential link. Rather than representing an association between two related concepts, a referential link provides a link between an item of information and an elaboration or explanation of that information. A simple example would be a link from a word to a definition of that word. One simple way of conceptualising the difference between associative and referential links is that the items linked by an associative link can exist independently, but are conceptually related. However the item at one end of a referential link exists because of the existence of the other item.
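A system that does differentiate link types might store the type explicitly alongside each link, so that a browser could render or filter the three kinds differently. The link examples below are invented:

```python
# Sketch: the link taxonomy above (structural, associative, referential)
# modelled as an explicit attribute of each link. Node names are invented.

from enum import Enum

class LinkType(Enum):
    STRUCTURAL = "structural"    # organisation of the information space
    ASSOCIATIVE = "associative"  # semantic relation between independent items
    REFERENTIAL = "referential"  # item -> elaboration or definition of it

links = [
    ("chapter-1", "chapter-2", LinkType.STRUCTURAL),
    ("indexing", "classification", LinkType.ASSOCIATIVE),
    ("recall", "recall-definition", LinkType.REFERENTIAL),
]

# A reader interested only in content relations could hide structural links:
content_links = [l for l in links if l[2] is not LinkType.STRUCTURAL]
print(len(content_links))  # 2
```

The WWW's single untyped link mechanism is what such a scheme would replace; keeping the type lets an interface separate the structure of the space from its content.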

It is important to note that many hypermedia systems (most notably the WWW) do not provide a mechanism for differentiating between the different link types. As a result, the same mechanism is used to represent these different types of links. A common result is that users are not readily able to differentiate between the structure of the information space and its content, resulting in difficulty in navigation.

As a final point on linking, it is useful to note that we could take a holistic perspective and recognise that every item of information is related to every other item of information within some context. Hypermedia however is about sifting through this almost-infinite web of interconnections, to identify and utilise those that help with the goal of information access and manipulation within some context. As was originally recognised by Vannevar Bush over 50 years ago, these associative relationships are analogous to the way in which our mind appears to achieve such a high level of efficacy in information retrieval.

References
1. T.H. Nelson, Literary Machines, Swarthmore, Penn., 1981. Available from Nelson.
2. D. Engelbart, “A Conceptual Framework for Augmentation of Man’s Intellect,” in Vistas in Information Handling, Vol. I, Spartan Books, Washington, D.C., 1963,
3. J. Conklin, “Hypertext: An Introduction and Survey,” Computer, Sept. 1987.
4. N. Yankelovich, N. Meyrowitz, and A. van Dam, “Reading and Writing the Electronic Book,” Computer, Oct. 1985, pp. 15-30.
5. F. Halasz, T. Moran, and R. Trigg, “Notecards in a Nutshell,” CHI+GI Conf. Proc.: Human Factors in Computing Systems and Graphics Interfaces, 1987.
6. D. Goodman, “The Two Faces of Hyper- Card,” Macworld, Oct. 1987, pp. 122-129.
7. R. Marcus, “An Experimental Comparison of the Effectiveness of Computers and Humans as Search Intermediaries,” J. Am. Society for Information Science, Vol. 34.
8. B. Shneiderman, Designing the User Interface: Strategies for Effective Human Computer Interaction, Addison-Wesley, Reading, Mass., 1987.
9. G. Gentner and A. Stevens, eds., Mental Models, Lawrence Erlbaum Associates, Hillsdale, N.J., 1983.
10. C.L. Borgman, “The User’s Mental Model of an Information Retrieval System: An Experiment on a Prototype On-line Catalog,” Int’l J. Man-Machine Studies.
11. W. Zoellick, “CD-ROM Software Development,” Byte, Vol. 11, 1986, pp. 177-188.
12. B. Shneiderman, “User Interface Design and Evaluation for an Electronic Encyclopedia,” in Cognitive Engineering in the Design of Human-Computer Interaction and Expert Systems, G. Salvendy, ed., Elsevier, Amsterdam, 1987, pp. 207-223.
13. G. Marchionini, “Information-Seeking Strategies of Novices Using a Full-Text Electronic Encyclopedia,” J. Am. Society for Information Science, in press.
14. Anderson J.R., Corbett A.T., Koedinger K., & Pelletier R., (1995). Cognitive tutors: Lessons learned. The Journal of Learning Sciences, 4,167-207.
15. Brusilovsky P., 1998, "Adaptive Educational Systems on the World-Wide-Web: A Review of Available Technologies". In: Proceedings of Workshop "WWW-Based Tutoring" at 4th International Conference on Intelligent Tutoring Systems (ITS'98), San Antonio, TX, August 1998.
16. de La Passardiere B., Dufresne A., 1992, "Adaptive navigational tools for educational Hypermedia", I. Tomek (Ed), Computer Assisted Learning (pp. 555-567), Springer-Verlag.
17. Eklund J, Brusilovsky P., Schwarz E., 1997, "Adaptive Textbooks on the WWW", in: Proceedings of AUSWEB97 - The Third Australian Conference on the World Wide Web, pp. 186-192, Queensland, Australia.
18. Stern M., Woolf B.P., Kurose J.F., 1997, "Intelligence on the Web?", Artificial Intelligence in Education, IOS Press, 490-497.

User interface

The need for effective information retrieval systems becomes increasingly important as computer-based information repositories grow larger and more diverse. In this tutorial, we present the key issues involved in the use and design of effective interfaces to information retrieval systems. The process of satisfying information needs is analyzed as a problem-solving activity in which users learn and refine their needs as they interact with a repository. Current systems are analyzed in terms of key interface and interaction techniques such as querying, browsing, and relevance feedback. We discuss the impact of information-seeking strategies on the search process and what is needed to support the search process more effectively. Retrieval system evaluation techniques are discussed in terms of their implications for users. We close by outlining some user-centered design strategies for retrieval systems.

The field of information retrieval can be divided along the lines of its system-based and user-based concerns. While the system-based view is concerned with efficient search techniques to match query and document representations, the user-based view must account for the cognitive state of the searcher and the problem solving context. People are drawn to an information retrieval system because they perceive that they lack some knowledge to solve a problem or perform a task. This creates an "anomalous state of knowledge"[1] or "situation of irresolution" [6] in which information seekers must find something they know little or nothing about.

Information retrieval systems must not only provide efficient retrieval, but must also support the user in describing a problem that s/he does not understand well. The process is not only one of providing a good query language, but also one of supporting an iterative dialogue model. As users query and browse the repository, they learn more about the problem and potential solutions and therefore refine their conceptualization of the problem. The information being sought differs from that being sought at the beginning of the session. The user is engaged in a problem solving session in which the problem to be solved, that of finding relevant information, evolves and is refined through the process of seeing the results of intermediate queries. 

The Vocabulary Problem

Even in cases where the information is well-known, a vocabulary problem still exists. Users may know what they are looking for, but lack the knowledge needed to articulate the problem in the terms and abstractions used by the retrieval system. An inherent problem is that people use a surprisingly diverse set of terms to refer to the same object, such that the probability of two people choosing the same term for a familiar object is less than 15 percent [3]. This problem is exacerbated by the fact that information repositories are often indexed by experts and by the inherent properties of the objects. Expert indexing causes problems because less knowledgeable users, who constitute the majority of people experiencing anomalous states of knowledge, are less likely to understand the terminology used by experts. Indexing by inherent properties causes problems because most information seeking takes place in some problem-solving context. People are looking for information that is used for something and are therefore more concerned with how an object is used, not with its inherent properties [4].

Interfaces For Retrieval Systems

Current information retrieval systems have addressed these inherent properties of information seeking and indexing in a variety of ways. Browsing has been employed to facilitate the iterative and ill-defined nature of information seeking, but can lead to a loss of direction and overly narrow searches. Queries provide a means to direct the search, but often rely on the user understanding a complex query language and proper vocabulary to be effective. An integration of these strategies is a promising approach that can solve some of these problems [5]. Information visualization techniques such as the Perspective Wall at Xerox PARC [2] can be used in interface design to improve both browsing and querying.

Good information retrieval system design combines support for information-seeking strategies, such as browsing and direct querying, with an interface that provides effective cues to the location, use, and characteristics of the retrieved information. Feedback techniques are also crucial to support the iterative refinement of information needs.

Evaluation Techniques

Traditional retrieval system evaluation relies on the measures of recall (the proportion of relevant items in the entire repository which have been retrieved) and precision (the proportion of retrieved items which are relevant) for assessment of system effectiveness. There are significant problems with these measures of effectiveness, and the criterion on which they are based, relevance. These include such issues as:

  • who makes the relevance judgments, and how are these judgments related to the user's context and the use of the items;
  • lack of knowledge of the total number of relevant items in the repository for any given information problem;
  • how to evaluate these measures over the course of an interaction; and,
  • the validity of the measures themselves as indicators of the effectiveness of the information interaction.

This suggests the need for new measures of retrieval effectiveness in interactive retrieval systems.

Furthermore, new evaluation techniques are needed that account not only for the accuracy of a retrieval system, but also its interactive abilities and ease of use. A system that measures poorly in recall and precision, but provides good browsing and iterative querying facilities may be more successful overall in responding to a person's information problem than a system which is "more effective" in terms of recall and precision. 
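The recall and precision measures defined above can be computed directly from the sets involved; the relevant and retrieved sets here are invented for illustration:

```python
# Recall = proportion of relevant items in the repository that were retrieved.
# Precision = proportion of retrieved items that are relevant.
# Both sets below are invented example data.

relevant = {1, 2, 3, 4}   # items judged relevant to the information need
retrieved = {2, 3, 5}     # items the system actually returned

hits = relevant & retrieved
recall = len(hits) / len(relevant)
precision = len(hits) / len(retrieved)

print(round(recall, 2), round(precision, 2))  # 0.5 0.67
```

The first problem noted in the list above is visible even in this toy example: computing recall requires knowing the full `relevant` set, which for a real repository and a real information problem is generally unknowable.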

Design Strategies

Guidelines for the design of information retrieval systems must address not only issues of look-and-feel, but also of effective interaction. Dialog models based on relevance feedback and query reformulation explicitly address the ill-defined nature of information seeking by allowing users to learn from the repository and iteratively refine the information need. Systems need to support a number of interaction styles, such as querying and browsing, to accommodate the different kinds of search strategies users may need to use.

Design strategies for retrieval systems need to pay particular attention to the interaction between users and the texts they retrieve. People's information seeking behavior needs to be analyzed in the problem solving context in which their information needs arise. For example, what are some of the common associations people make? What information is closely related? What different perspectives can a piece of information be viewed from? The answers to these questions often differ, depending critically on the nature of the task and the individuals performing the task. Task analysis, user modeling and interaction modeling are some of the strategies that can be used to improve the design of retrieval systems.

User-centered design of interfaces has been proposed by many researchers. Scholars have proposed that user interfaces should be designed such that they can support the creativity of users. Shneiderman has proposed the following activities that require powerful interfaces to support creative work:

1. Searching and browsing digital libraries: Users should have more control over searching and browsing so that they can apply their prior knowledge and retrieve information that supports their creative activities.
2. Consulting with peers: Users often consult peers about new findings or research ideas, and information is collected at different stages of the consultation process using a variety of tools and techniques.
3. Visualizing data and processes: Interfaces that support visualization of digital library contents are very useful, and further work is needed to integrate these technologies smoothly.
4. Thinking through free association: Associating ideas through related concepts is a useful method of thinking and creativity. Various online tools, such as IdeaFisher and MindManager, support concept association; the search interface should allow users to invoke these tools at appropriate points throughout the search process.
5. Disseminating results: New information may be disseminated to different types of users, and digital libraries should make it easy for users to find other workers in a field of study.

Thus, research and evaluation are needed to build systems that support users in all activities related to their creativity. Scholars have suggested keeping comprehensive logs of online information use for analyzing user behavior, which may yield insights for designing future information-access mechanisms and user interfaces.
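The transaction logging suggested above can be as simple as appending one timestamped record per user interaction; the sketch below illustrates the idea. The event names and record fields are assumptions for illustration, not a standard log format.

```python
import io
import json
import time

def log_event(logfile, user_id, action, detail):
    """Append one timestamped interaction record as a JSON line."""
    record = {"ts": time.time(), "user": user_id,
              "action": action, "detail": detail}
    logfile.write(json.dumps(record) + "\n")

# Usage with an in-memory buffer; a real system would use an append-only file.
buf = io.StringIO()
log_event(buf, "u42", "query", "digital libraries")
log_event(buf, "u42", "click", "doc-17")
print(len(buf.getvalue().splitlines()))  # 2
```

Grouping such records by user and session later allows analysts to reconstruct search behavior, which is the kind of insight the passage anticipates.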

References

  1. Belkin, N.J., Oddy, R.N., Brooks, H.M. ASK for Information Retrieval: Parts I and II. Journal of Documentation, 38(2-3), 1982, pp. 61-71; 145-164.
  2. Card, S.K., Robertson, G.G., Mackinlay, J.D. The Information Visualizer, an Information Workspace. Human Factors in Computing Systems: CHI '91 Proceedings, ACM, 1991, pp. 181-194.
  3. Furnas, G.W., Landauer, T.K., Gomez, L.M., Dumais, S.T. The Vocabulary Problem in Human-System Communication. Communications of the ACM, 30(11), 1987, pp. 964-971.
  4. Kwasnik, B.H. How a Personal Document's Intended Use or Purpose Affects Its Classification in an Office. Proceedings SIGIR '89, ACM, 1989, pp. 207-210.
  5. Thompson, R.H., Croft, W.B. Support for Browsing in an Intelligent Text Retrieval System. International Journal of Man-Machine Studies, 30, 1989, pp. 639-668.
  6. Winograd, T., Flores, F. Understanding Computers and Cognition: A New Foundation for Design. Ablex, 1986.
  7. Chowdhury, G.C. Introduction to Modern Information Retrieval. London, Library Association Publishing, 1999.
  8. Henninger, S., Belkin, N.J. Interface Issues and Interaction Strategies for Information Retrieval Systems. In Conference Companion on Human Factors in Computing Systems, ACM, 1996, pp. 352-353.
