
Books in the Synthesis Lectures on Information Concepts, Retrieval, and Services series

  • by Xiaojun Yuan
    736,-

  • by Laura Koesten
    536,-

  • by Laurie J. Bonnici
    540,-

    This book provides a new model to explore discoverability and enhance the meaning of information. The authors have coined the term epidata, which includes items and circumstances that shape how the data in a document are expressed but are not part of the ordinary process of retrieval systems. Epidata afford pathways and point to details that cast light on proximities that might otherwise go unknown. In addition, epidata are clues for discerning mis- and dis-information. There are many ways to find needed information; however, finding the most usable information is not an easy task. The book explores the uses of proximity and the concept of epidata, which increases the probability of finding functional information. The authors sketch a constellation of proximities, present examples of attempts to accomplish proximity, and provoke a discussion of the role of proximity in the field. In addition, the authors suggest that proximity is a thread between retrieval constructs based on known topics and predictable relations, and types of information seeking that lie outside such constructs: browsing, stumbling, encountering, detective work, art making, and translation.

  • by Reagan W. Moore
    536 - 636,-

    Genealogies document relationships between persons involved in historical events. Information about the events is parsed from communications from the past. This book explores a way to organize information from multiple communications into a trustworthy representation of a genealogical history of the modern world. The approach defines metrics for evaluating the consistency, correctness, closure, connectivity, completeness, and coherence of a genealogy. The metrics are evaluated using a 312,000-person research genealogy that explores the common ancestors of the royal families of Europe. A major result is that completeness is defined by a genealogy symmetry property driven by two exponential processes, the doubling of the number of potential ancestors each generation, and the rapid growth of lineage coalescence when the number of potential ancestors exceeds the available population. A genealogy expands from an initial root person to a large number of lineages, which then coalesce into a small number of progenitors. Using the research genealogy, candidate progenitors for persons of Western European descent are identified. A unifying ancestry is defined to which historically notable persons can be linked.
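
A minimal sketch of the arithmetic behind the symmetry described above: the number of potential ancestors doubles each generation and quickly exceeds any historical population, forcing lineages to coalesce. The population figure below is a rough order-of-magnitude assumption for medieval Europe, not a number taken from the book.

```python
# Potential ancestors double each generation (2^n) until they exceed the
# available population, after which lineages must coalesce.
EUROPEAN_POPULATION_CA_1300 = 70_000_000  # assumed, order of magnitude only

def potential_ancestors(generation: int) -> int:
    """Upper bound on distinct ancestors n generations back."""
    return 2 ** generation

for gen in range(20, 32):
    n = potential_ancestors(gen)
    if n > EUROPEAN_POPULATION_CA_1300:
        print(f"By generation {gen} (~{gen * 25} years back), "
              f"{n:,} potential ancestors exceed the assumed population "
              f"of {EUROPEAN_POPULATION_CA_1300:,}: lineages must coalesce.")
        break
```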

  • by Weili Guan
    1 216,-

    This book sheds light on state-of-the-art theories for more challenging outfit compatibility modeling scenarios. In particular, it presents several cutting-edge graph learning techniques that can be used for outfit compatibility modeling. Owing to its considerable economic value, fashion compatibility modeling has gained increasing research attention in recent years. Although great efforts have been dedicated to this research area, previous studies mainly focused on fashion compatibility modeling for outfits that involved only two items, overlooking the fact that each outfit may be composed of a variable number of items. This book develops a series of graph-learning-based outfit compatibility modeling schemes, all of which have been proven effective over several public real-world datasets. This systematic approach benefits readers by introducing techniques for compatibility modeling of outfits that involve a variable number of composing items. To deal with this challenging task, the book provides comprehensive solutions, including correlation-oriented graph learning, modality-oriented graph learning, unsupervised disentangled graph learning, partially supervised disentangled graph learning, and metapath-guided heterogeneous graph learning. Moreover, it sheds light on research frontiers that can inspire future research directions for scientists and researchers.
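
A minimal sketch of the graph view behind such models: outfit items are nodes, each pair of items forms an edge, and the outfit's compatibility aggregates edge scores over however many items the outfit contains. The toy embeddings here are hand-made stand-ins; real systems learn them with graph neural networks.

```python
import math

def cosine(u, v):
    """Cosine similarity between two item embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def outfit_compatibility(embeddings):
    """Mean pairwise edge score over a variable-size set of items."""
    n = len(embeddings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cosine(embeddings[i], embeddings[j]) for i, j in pairs) / len(pairs)

# Three-item outfit (top, trousers, shoes) with toy 3-d embeddings.
outfit = [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1], [0.7, 0.3, 0.3]]
print(f"compatibility: {outfit_compatibility(outfit):.3f}")
```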

  • by Rhiannon Bettivia
    606,-

    This book explores provenance, the study and documentation of how things come to be. Traditionally defined as the origins, source, or ownership of an artifact, provenance today is not limited to historical domains. It can be used to describe what did happen (retrospective provenance), what could happen (subjunctive provenance), or what will happen (prospective provenance). Provenance information is ubiquitous and abundant; for example, a wine label that details the winery, type of grape, and country of origin tells a provenance story that determines the value of the bottle. This book presents select standards used in organizing provenance information and provides concrete examples of how to implement them. Provenance transcends disciplines, and this book is intended for anyone who is interested in documenting workflows and recipes. The goal is to empower readers to frame and answer provenance questions for their own work. Provenance is increasingly important in computational workflows and e-sciences, and this book addresses the need for a practical introduction to provenance documentation with simple-to-use multi-disciplinary examples and activities. Case studies and examples address the creation of basic records using a variety of provenance metadata models, and the differences between PROV, ProvONE, and PREMIS are discussed. Readers will gain an understanding of the uses of provenance metadata in different domains and sectors in order to make informed decisions on their use. Documenting provenance can be a daunting challenge; with clear examples and explanations, exploring one's own provenance needs becomes less intimidating.
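
A minimal sketch of a retrospective provenance record in the spirit of W3C PROV's core terms (entity, activity, agent and the relations among them), written here as plain Python data rather than with any particular PROV library; the identifiers are invented for illustration.

```python
# A toy retrospective provenance record using genuine PROV relation names
# (wasGeneratedBy, wasAssociatedWith, wasDerivedFrom) over invented ids.
record = {
    "entity":   {"id": "ex:report-v2", "type": "document"},
    "activity": {"id": "ex:editing", "startedAt": "2023-05-01T09:00:00Z"},
    "agent":    {"id": "ex:alice", "type": "person"},
    "relations": [
        ("ex:report-v2", "wasGeneratedBy",    "ex:editing"),
        ("ex:editing",   "wasAssociatedWith", "ex:alice"),
        ("ex:report-v2", "wasDerivedFrom",    "ex:report-v1"),
    ],
}

for subj, rel, obj in record["relations"]:
    print(f"{subj} --{rel}--> {obj}")
```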

  • by Xuemeng Song
    616,-

  • by Rishiraj Saha Roy
    730,-

    Question answering (QA) systems on the Web try to provide crisp answers to information needs posed in natural language, replacing the traditional ranked list of documents. QA, posing a multitude of research challenges, has emerged as one of the most actively investigated topics in the information retrieval, natural language processing, and artificial intelligence communities today. The flip side of such diverse and active interest is that publications are highly fragmented across several venues in these communities, making it very difficult for new entrants to the field to get a good overview of the topic. Through this book, we attempt to mitigate this problem by providing an overview of the state of the art in question answering. We cover the twin paradigms of curated Web sources used in QA tasks: trusted text collections like Wikipedia, and objective information distilled into large-scale knowledge bases. We discuss distinct methodologies that have been applied to solve the QA problem in both these paradigms, using instantiations of recent systems for illustration. We begin with an overview of the problem setup and evaluation, cover notable sub-topics like open-domain, multi-hop, and conversational QA in depth, and conclude with key insights and emerging topics. We believe that this resource is a valuable contribution towards a unified view on QA, helping graduate students and researchers planning to work on this topic in the near future.

  • by Maria Stone
    740,-

    This book is intended for anyone interested in learning more about how search works and how it is evaluated. We all use search; it's a familiar utility. Yet few of us stop and think about how search works, what makes search results good, and who, if anyone, decides what good looks like. Search has a long and glorious history, yet it continues to evolve, and with it, the measurement and our understanding of the kinds of experiences search can deliver continue to evolve as well. We will discuss the basics of how search engines work, how humans use search engines, and how measurement works. Equipped with these general topics, we will then dive into the established ways of measuring search user experience, and their pros and cons. We will talk about collecting labels from human judges, analyzing usage logs, surveying end users, and even touch upon automated evaluation methods. After introducing different ways of collecting metrics, we will cover experimentation as it applies to search evaluation. The book will cover evaluating different aspects of search, from the search user interface (UI), to results presentation, to the quality of search algorithms. In covering these topics, we will touch upon many issues in evaluation that became sources of controversy, from user privacy, to ethical considerations, to transparency, to potential for bias. We will conclude by contrasting measuring with understanding, and pondering the future of search evaluation.
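
As one concrete instance of the judge-label metrics such evaluations rely on, here is a minimal sketch of normalized discounted cumulative gain (nDCG), a standard way to score a ranked result list against graded human relevance labels. The metric choice is ours for illustration, not necessarily the book's.

```python
import math

def dcg(gains):
    """Discounted cumulative gain of a ranked list of graded labels."""
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg_at_k(gains, k):
    """DCG of the actual ranking divided by DCG of the ideal ranking."""
    ideal = sorted(gains, reverse=True)
    best = dcg(ideal[:k])
    return dcg(gains[:k]) / best if best > 0 else 0.0

# Judge labels for the top five results of one query (0 = bad ... 3 = perfect).
labels = [3, 0, 2, 1, 0]
print(f"nDCG@5 = {ndcg_at_k(labels, 5):.3f}")
```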

  • by Pnina Fichman
    490,-

    Research on multiculturalism and information and communication technology (ICT) has been important to understanding recent history, planning for future large-scale initiatives, and understanding unrealized expectations for social and technological change. This interdisciplinary area of research has examined interactions between ICT and culture at the group and society levels. However, there is debate within the literature as to the nature of the relationship between culture and technology. In this synthesis, we suggest that the tensions result from the competing ideologies that drive researchers, allowing us to conceptualize the relationship between culture and ICT under three primary models, each with its own assumptions: 1) Social informatics, 2) Social determinism, and 3) Technological determinism. Social informatics views the relationship as one of sociotechnical interaction, in which culture and ICTs affect each other mutually and iteratively, rather than linearly; the vast majority of the literature approaches the relationship between ICT and culture under the assumptions of social informatics. From a socially deterministic perspective, ICTs are viewed as the dependent variable in the equation, whereas, from a technologically deterministic perspective, ICTs are an independent variable. The issues of multiculturalism and ICTs have attracted much scholarly attention and have been explored in a myriad of contexts, with substantial literature on global development, social and political issues, business and public administration, as well as education and scholarly collaboration. We synthesize here research in the areas of global development, social and political issues, and business collaboration. Finally, we conclude by proposing under-explored areas for future research directions.

  • by Jennifer Pearson
    540,-

    Reading is a complex human activity that has evolved, and co-evolved, with technology over thousands of years. Mass printing in the fifteenth century firmly established what we know as the modern book, with its physical format of covers and paper pages, and now-standard features such as page numbers, footnotes, and diagrams. Today, electronic documents are enabling paperless reading supported by eReading technologies such as Kindles and Nooks, yet a high proportion of users still opt to print on paper before reading. This persistent habit of "printing to read" is one sign of the shortcomings of digital documents -- although the popularity of eReaders is one sign of the shortcomings of paper. How do we get the best of both worlds? The physical properties of paper (for example, it is light, thin, and flexible) contribute to the ease with which physical documents are manipulated, but these properties afford a completely different set of interactions from their digital equivalents. Paper can be folded, ripped, or scribbled on almost subconsciously -- activities that require significant cognitive attention in their digital form, if they are even possible. The nearly subliminal interaction that comes from years of learned behavior with paper has been described as lightweight interaction, which is achieved when a person actively reads an article in a way that is so easy and unselfconscious that they are not apt to remember their actions later. Reading is now in a period of rapid change, and digital text is fast becoming the predominant mode of reading. As a society, we are merely at the start of the journey of designing truly effective tools for handling digital text. This book investigates the advantages of paper, how the affordances of paper can be realized in digital form, and what forms best support lightweight interaction for active reading. To understand how to design for the future, we review the ways reading technology and reader behavior have both changed and remained constant over hundreds of years. We explore the reasoning behind reader behavior and introduce and evaluate several user interface designs that implement these lightweight properties familiar from our everyday use of paper. We start by looking back, reviewing the development of reading technology and the progress of research on reading over many years. Drawing key concepts from this review, we move forward to develop and test methods for creating new and more effective interactions for supporting digital reading. Finally, we lay down a set of lightweight attributes which can be used as evidence-based guidelines to improve the usability of future digital reading technologies. By the end of this book, then, we hope you will be equipped to critique the present state of digital reading, and to better design and evaluate new interaction styles and technologies.

  • by Thomas Roelleke
    540,-

    Information Retrieval (IR) models are a core component of IR research and IR systems. The past decade brought a consolidation of the family of IR models, which by 2000 consisted of relatively isolated views on TF-IDF (Term Frequency times Inverse Document Frequency) as the weighting scheme in the vector-space model (VSM), the probabilistic relevance framework (PRF), the binary independence retrieval (BIR) model, BM25 (Best Match Version 25, the main instantiation of the PRF/BIR), and language modelling (LM). The early 2000s also saw the arrival of divergence from randomness (DFR). Regarding intuition and simplicity, though LM is clear from a probabilistic point of view, several people have stated: "It is easy to understand TF-IDF and BM25. For LM, however, we understand the math, but we do not fully understand why it works." This book takes a horizontal approach, gathering the foundations of TF-IDF, PRF, BIR, Poisson, BM25, LM, probabilistic inference networks (PINs), and divergence-based models. The aim is to create a consolidated and balanced view of the main models. A particular focus of this book is on the "relationships between models." This includes an overview of the main frameworks (PRF, logical IR, VSM, generalized VSM) and a pairing of TF-IDF with other models. It becomes evident that TF-IDF and LM measure the same thing, namely the dependence (overlap) between document and query. The Poisson probability helps to establish probabilistic, non-heuristic roots for TF-IDF, and the Poisson parameter, average term frequency, is a binding link between several retrieval models and model parameters. Table of Contents: List of Figures / Preface / Acknowledgments / Introduction / Foundations of IR Models / Relationships Between IR Models / Summary & Research Outlook / Bibliography / Author's Biography / Index
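
A minimal sketch of the two weighting schemes the book relates, in common textbook formulations (the book itself derives and compares many variants):

```python
import math

def idf(N, df):
    """Inverse document frequency: rarer terms weigh more."""
    return math.log(N / df)

def tf_idf(tf, N, df):
    """Classic TF-IDF weight of one term in one document."""
    return tf * idf(N, df)

def bm25_weight(tf, N, df, doc_len, avg_doc_len, k1=1.2, b=0.75):
    """BM25 term weight with the usual k1/b length normalization."""
    smoothed_idf = math.log(1 + (N - df + 0.5) / (df + 0.5))  # Lucene-style
    norm = tf + k1 * (1 - b + b * doc_len / avg_doc_len)
    return smoothed_idf * tf * (k1 + 1) / norm

# Same statistics under both schemes: tf=3 in a 100-word document, term
# occurring in 1,000 of 1,000,000 documents, average document length 120.
print(f"TF-IDF: {tf_idf(3, 1_000_000, 1_000):.2f}")
print(f"BM25:   {bm25_weight(3, 1_000_000, 1_000, 100, 120):.2f}")
```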

  • by Preben Hansen
    730,-

    Society faces many challenges in workplaces, everyday life situations, and education contexts. Within information behavior research, there are frequent calls for inclusiveness and for greater collaboration, through user-centered design approaches and, more specifically, participatory design practices. Collaboration and participation are essential in addressing contemporary societal challenges, designing creative information objects and processes, and developing spaces for learning as well as information and research interventions. The intention is to improve access to information and the benefits to be gained from it. This also applies to bridging the digital divide and to embracing artificial intelligence. With regard to research and practices within information behavior, it is crucial that all users be involved. Many information activities (i.e., activities falling under the umbrella terms of information behavior and information practices) manifest through participation, and thus methods such as participatory design may help unfold both information behavior and practices and the creation of information objects, new models, and theories. Information sharing is one of participatory design's core activities. For participatory design, with its value set of democratic, inclusive, and open participation towards innovative practices in a diversity of contexts, it is essential to understand how information activities such as sharing manifest themselves. For information behavior studies, it is essential to deepen understanding of how information sharing manifests in order to improve access to information and the use of information. Third Space is a physical, virtual, cognitive, and conceptual space where participants may negotiate, reflect, and form new knowledge and worldviews while working toward creative, practical, and applicable solutions; finding innovative, appropriate research methods; interpreting findings; proposing new theories; recommending next steps; and even designing solutions such as new information objects or services. Information sharing in participatory design manifests in tandem with many other information interaction activities, especially information and cognitive processing. Although there are practices of individual information sharing and information encountering, information sharing mostly relates to collaborative information behavior practices, creativity, and collective decision-making. Our purpose with this book is to enable students, researchers, and practitioners within a multi-disciplinary research field, including information studies and Human-Computer Interaction, to gain a deeper understanding of how information sharing, a core activity of participatory design, takes place when participatory design methods are used to address contemporary societal challenges, with Third Space serving as a platform for information interaction. This could also apply to information behavior studies using participatory design as methodology. We elaborate interpretations of core concepts such as participatory design, Third Space, information sharing, and collaborative information behavior before discussing participatory design methods and processes in more depth. We also touch on information behavior, information practice, and other important concepts. Third Space, information sharing, and information interaction are discussed in some detail.
    A framework is suggested, with Third Space as a core intersecting zone, platform, and adaptive, creative space in which to study information sharing and other information behaviors and interactions. As a tool to envision information behavior and suggest future practices, participatory design offers a set of methods and tools through which new interpretations of the design of information behavior studies, and eventually new information objects, can be initiated with multiple stakeholders in future information landscapes. For this purpose, we argue that Third Space can be used as an intersection zone to study information sharing and other information activities, but more importantly it can serve as a Third Space Information Behavior (TSIB) study framework in which participatory design methodology and processes are applied to information behavior research studies and applications such as information objects, systems, and services, with recognition of the importance of situated awareness.

  • by Chirag Shah
    666,-

    While great strides have been made in the field of search and recommendation, there are still challenges and opportunities to address information access issues that involve solving tasks and accomplishing goals for a wide variety of users. Specifically, we lack intelligent systems that can detect not only the request an individual is making (what), but also understand and utilize the intention (why) and strategies (how) while providing information and enabling task completion. Many scholars in the fields of information retrieval, recommender systems, productivity (especially in task management and time management), and artificial intelligence have recognized the importance of extracting and understanding people's tasks and the intentions behind performing those tasks in order to serve them better. However, we are still struggling to support them in task completion, e.g., in search and assistance, and it has been challenging to move beyond single-query or single-turn interactions. The proliferation of intelligent agents has unlocked new modalities for interacting with information, but these agents will need to understand current and future contexts and assist users at the task level. This book will focus on task intelligence in the context of search and recommendation. Chapter 1 introduces readers to the issues of detecting, understanding, and using task and task-related information in an information episode (with or without active searching). This is followed by presenting several prominent ideas and frameworks about how tasks are conceptualized and represented in Chapter 2. In Chapter 3, the narrative moves to showing how task type relates to user behaviors and search intentions. A task can be explicitly expressed in some cases, such as in a to-do application, but often it is unexpressed. Chapter 4 covers these two scenarios with several related works and case studies. Chapter 5 shows how task knowledge and task models can contribute to addressing emerging retrieval and recommendation problems. Chapter 6 covers evaluation methodologies and metrics for task-based systems, with relevant case studies to demonstrate their uses. Finally, the book concludes in Chapter 7, with ideas for future directions in this important research area.

  • by Michael Thelwall
    800,-

    Many research projects involve analyzing sets of texts from the social web or elsewhere to get insights into issues, opinions, interests, news discussions, or communication styles. For example, many studies have investigated reactions to Covid-19 social distancing restrictions, conspiracy theories, and anti-vaccine sentiment on social media. This book describes word association thematic analysis, a mixed methods strategy to identify themes within a collection of social web or other texts. It identifies these themes in the differences between subsets of the texts, including female vs. male vs. nonbinary, older vs. newer, country A vs. country B, positive vs. negative sentiment, high scoring vs. low scoring, or subtopic A vs. subtopic B. It can also be used to identify the differences between a topic-focused collection of texts and a reference collection. The method starts by automatically finding words that are statistically significantly more common in one subset than another, then identifies the context of these words and groups them into themes. It is supported by the free Windows-based software Mozdeh for data collection or importing and for the quantitative analysis stages. This book explains the word association thematic analysis method, with examples, and gives practical advice for using it. It is primarily intended for social media researchers and students, although the method is applicable to any collection of short texts.
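
A minimal sketch (our illustration, not Mozdeh's implementation) of the first step described above: finding words significantly more common in one subset of texts than another, here scored with the Dunning log-likelihood keyness statistic.

```python
import math
from collections import Counter

def keyness(word, counts_a, counts_b):
    """Dunning log-likelihood for one word across two text subsets."""
    a, b = counts_a[word], counts_b[word]
    na, nb = sum(counts_a.values()), sum(counts_b.values())
    expected_a = na * (a + b) / (na + nb)  # expected count in subset A
    expected_b = nb * (a + b) / (na + nb)  # expected count in subset B
    ll = 0.0
    if a:
        ll += a * math.log(a / expected_a)
    if b:
        ll += b * math.log(b / expected_b)
    return 2 * ll

# Toy subsets; real use would compare, e.g., older vs. newer posts.
subset_a = Counter("the vaccine is safe and the vaccine works".split())
subset_b = Counter("the lockdown is hard and the rules are strict".split())
for w in sorted(set(subset_a) | set(subset_b)):
    print(f"{w:10s} {keyness(w, subset_a, subset_b):.2f}")
```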

  • by David Hawking
    796,-

    Simulated test collections may find application in situations where real datasets cannot easily be accessed due to confidentiality concerns or practical inconvenience. They can potentially support Information Retrieval (IR) experimentation, tuning, validation, performance prediction, and hardware sizing. Naturally, the accuracy and usefulness of results obtained from a simulation depend upon the fidelity and generality of the models which underpin it. The fidelity of emulation of a real corpus is likely to be limited by the requirement that confidential information in the real corpus should not be able to be extracted from the emulated version. We present a range of methods exploring trade-offs between emulation fidelity and degree of preservation of privacy. We present three different simple types of text generator which work at a micro level: Markov models, neural net models, and substitution ciphers. We also describe macro-level methods where we can engineer macro properties of a corpus, giving a range of models for each of the salient properties: document length distribution, word frequency distribution (for independent and non-independent cases), word length and textual representation, and corpus growth. We present results of emulating existing corpora and of scaling up corpora by two orders of magnitude. We show that simulated collections generated with relatively simple methods are suitable for some purposes and can be generated very quickly. Indeed, it may sometimes be feasible to embed a simple lightweight corpus generator into an indexer for the purpose of efficiency studies. Naturally, a corpus of artificial text cannot support IR experimentation in the absence of a set of compatible queries. We discuss and experiment with published methods for query generation and query log emulation. We present a proof-of-the-pudding study in which we observe the predictive accuracy of efficiency and effectiveness results obtained on emulated versions of TREC corpora. The study includes three open-source retrieval systems and several TREC datasets. There is a trade-off between confidentiality and prediction accuracy, and there are interesting interactions between retrieval systems and datasets. Our tentative conclusion is that there are emulation methods which achieve useful prediction accuracy while providing a level of confidentiality adequate for many applications. Many of the methods described here have been implemented in the open source project SynthaCorpus, accessible at: https://bitbucket.org/davidhawking/synthacorpus/
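
A minimal sketch of one of the micro-level generators mentioned above: a word-level Markov model trained on a seed text. This is our illustration of the general idea; SynthaCorpus itself is considerably more elaborate.

```python
import random
from collections import defaultdict

def train(words, order=1):
    """Map each length-`order` state to the words observed to follow it."""
    model = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        model[state].append(words[i + order])
    return model

def generate(model, length=20, seed=None):
    """Random-walk the model to emit synthetic text."""
    random.seed(seed)
    state = random.choice(list(model))
    out = list(state)
    while len(out) < length:
        followers = model.get(tuple(out[-len(state):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the quick brown fox jumps over the lazy dog the quick dog".split()
print(generate(train(corpus), length=12, seed=42))
```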

  • by Anderson A. Ferreira
    730,-

    This book deals with a hard problem that is inherent to human language: ambiguity. In particular, we focus on author name ambiguity, a type of ambiguity that exists in digital bibliographic repositories, which occurs when an author publishes works under distinct names or distinct authors publish works under similar names. This problem may arise for a number of reasons, including the lack of standards and common practices, and the decentralized generation of bibliographic content. As a consequence, the quality of the main services of digital bibliographic repositories, such as search, browsing, and recommendation, may be severely affected by author name ambiguity. The focal point of the book is on automatic methods, since manual solutions do not scale to the size of the current repositories or the speed at which they are updated. Accordingly, we provide an ample view of the problem of automatic disambiguation of author names, summarizing the results of more than a decade of research on this topic conducted by our group, which were reported in more than a dozen publications that received over 900 citations so far, according to Google Scholar. We start by discussing its motivational issues (Chapter 1). Next, we formally define the author name disambiguation task (Chapter 2) and use this formalization to provide a brief, taxonomically organized, overview of the literature on the topic (Chapter 3). We then organize, summarize, and integrate the efforts of our own group on developing solutions for the problem that have historically produced state-of-the-art (by the time of their proposals) results in terms of the quality of the disambiguation results. Thus, Chapter 4 covers HHC - Heuristic-based Clustering, an author name disambiguation method that is based on two specific real-world assumptions regarding scientific authorship. Then, Chapter 5 describes SAND - Self-training Author Name Disambiguator, and Chapter 6 presents two incremental author name disambiguation methods, namely INDi - Incremental Unsupervised Name Disambiguation and INC - Incremental Nearest Cluster. Finally, Chapter 7 provides an overview of recent author name disambiguation methods that address new specific approaches such as graph-based representations, alternative predefined similarity functions, visualization facilities, and approaches based on artificial neural networks. The chapters are followed by three appendices that cover, respectively: (i) a pattern matching function for comparing proper names, used by some of the methods addressed in this book; (ii) a tool for generating synthetic collections of citation records for distinct experimental tasks; and (iii) a number of datasets commonly used to evaluate author name disambiguation methods. In summary, the book organizes a large body of knowledge and work in the area of author name disambiguation in the last decade, hoping to consolidate a solid basis for future developments in the field.
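
A minimal sketch (our illustration, not the book's own pattern matching function) of the kind of name-compatibility test author name disambiguation methods rely on: surnames must match, and the remaining name parts must agree at least on initials.

```python
def name_parts(name: str):
    """Split a name into (surname, list of given names/initials)."""
    parts = name.replace(".", " ").lower().split()
    return parts[-1], parts[:-1]

def compatible(name_a: str, name_b: str) -> bool:
    sur_a, given_a = name_parts(name_a)
    sur_b, given_b = name_parts(name_b)
    if sur_a != sur_b:
        return False
    for ga, gb in zip(given_a, given_b):
        if ga[0] != gb[0]:
            return False            # initials must agree
        if len(ga) > 1 and len(gb) > 1 and ga != gb:
            return False            # full given names must match exactly
    return True

print(compatible("A. A. Ferreira", "Anderson A. Ferreira"))  # True
print(compatible("A. Ferreira", "Antonio B. Ferreira"))      # True: ambiguous!
```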

  • by Brian C O'Connor
    450,-

    For over a century, motion pictures have entertained us, occasionally educated us, and even served a few specialized fields of study. Now, however, with the precipitous drop in prices and increase in image quality, motion pictures are as widespread as paperback books and postcards once were. Yet theories and practices of analysis for particular genres and analytical stances, and definitions, concepts, and tools that span platforms, have been wanting. Therefore, we developed a suite of tools to enable close structural analysis of the time-varying signal set of a movie. We take an information-theoretic approach: a message (a signal set) is generated (coded) under various antecedents, sent over some channel, and decoded under some other set of antecedents. Cultural, technical, and personal antecedents might favor certain message-making systems over others. The same holds true at the recipient end; yet the signal set remains the signal set. In order to discover how movies work, their structure and meaning, we honed ways to provide pixel-level analysis, forms of clustering, and precise descriptions of what parts of a signal influence viewer behavior. We assert that analysis of the signal set across the evolution of film, from Edison to Hollywood to Brakhage to cats on social media, yields a common ontology with instantiations (responses to changes in coding and decoding antecedents).
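
As one hedged illustration of pixel-level, time-varying-signal analysis of a movie (our example, not the authors' specific tooling): detecting cuts between shots by comparing gray-level histograms of consecutive frames. The frames below are tiny synthetic arrays; in practice they would be decoded from a movie file.

```python
import numpy as np

def histogram(frame, bins=16):
    """Normalized gray-level histogram of one frame."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def cut_indices(frames, threshold=0.5):
    """Indices where histogram distance between frames exceeds threshold."""
    cuts = []
    for i in range(1, len(frames)):
        dist = 0.5 * np.abs(histogram(frames[i]) - histogram(frames[i - 1])).sum()
        if dist > threshold:
            cuts.append(i)
    return cuts

rng = np.random.default_rng(0)
dark = [rng.integers(0, 64, (8, 8)) for _ in range(5)]       # one dark "shot"
bright = [rng.integers(192, 256, (8, 8)) for _ in range(5)]  # one bright "shot"
print("cut detected at frame:", cut_indices(dark + bright))  # -> [5]
```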

  • by Jiqun Liu
    706,-

    Since user study design has been widely applied in search interaction and information retrieval (IR) system evaluation studies, a deep reflection on and meta-evaluation of interactive IR (IIR) user studies is critical for sharpening the instruments of IIR research and improving the reliability and validity of the conclusions drawn from IIR user studies. To this end, we developed a faceted framework for supporting user study design, reporting, and evaluation based on a systematic review of the state-of-the-art IIR research papers recently published in several top IR venues (n=462). Within the framework, we identify three major types of research focus, extract and summarize facet values from specific cases, and highlight the under-reported user study components which may significantly affect the results of research. Then, we employ the faceted framework in evaluating a series of IIR user studies against their respective research questions and explain the roles and impacts of the underlying connections and "collaborations" among different facet values. By bridging diverse combinations of facet values with the study design decisions made to address research problems, the faceted framework can shed light on IIR user study design, reporting, and evaluation practices and help students and young researchers design and assess their own studies.

  • by Robert M. Losee
    410,-

    Information Retrieval performance measures are usually retrospective in nature, representing the effectiveness of an experimental process. However, in the sciences, phenomena may be predicted, given parameter values of the system. After developing a measure that can be applied retrospectively or predictively, the performance of a system using a single term can be predicted given several different types of probabilistic distributions. Information Retrieval performance can also be predicted with multiple terms, where statistical dependence between terms exists and is understood. These predictive models may be applied to realistic problems, and the results may then be used to validate the accuracy of the methods used. The application of metadata or index labels can be used to determine whether or not these features should be used in particular cases. Linguistic information, such as part-of-speech tag information, can increase the discrimination value of existing terminology and can be studied predictively. This work provides methods for measuring performance that may be used predictively. Means of predicting these performance measures are provided, both for the simple case of a single term in the query and for multiple terms. Methods of applying these formulae are also suggested.
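
A minimal sketch in the spirit of single-term performance prediction described above (our formulation, not necessarily the book's): given assumed probabilities that a term occurs in relevant and in non-relevant documents, plus the generality (fraction of relevant documents), predicted precision follows from Bayes' rule.

```python
def predicted_precision(p_term_rel: float, p_term_nonrel: float,
                        generality: float) -> float:
    """P(relevant | document contains the term), via Bayes' rule."""
    hit_rel = p_term_rel * generality
    hit_nonrel = p_term_nonrel * (1 - generality)
    return hit_rel / (hit_rel + hit_nonrel)

# Assumed values, for illustration only: the term occurs in 60% of relevant
# documents, 2% of non-relevant ones, and 1% of the collection is relevant.
print(f"predicted precision: {predicted_precision(0.6, 0.02, 0.01):.3f}")
```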

  • by Virginia Dressler
    816,-

    As digital collections continue to grow, the underlying technologies to serve up content also continue to expand and develop. New challenges are thus presented which continue to test ethical ideologies in the everyday environs of the practitioner. There are currently no solid guidelines or overarching codes of ethics to address such issues. The digitization of modern archival collections, in particular, presents interesting conundrums when factors of privacy are weighed and reviewed in both small and mass digitization initiatives. Ethical decision making needs to be present at the onset of project planning in digital projects of all sizes, and we also need to identify the role and responsibility of the practitioner to make more virtuous decisions on behalf of those with no voice or awareness of potential privacy breaches. In this book, notions of what constitutes private information are discussed, as is the potential presence of such information in both analog and digital collections. This book lays the groundwork to introduce the topic of privacy within digital collections by providing examples from documented real-world scenarios and making recommendations for future research. The notion of privacy as a concept is discussed, along with some historical perspective (including perhaps the most cited work on this topic, Warren and Brandeis' "Right to Privacy," 1890). Concepts from the 2014 Right to Be Forgotten case (Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González) are discussed with regard to the lessons that may be drawn from the response in Europe and how European data privacy laws have been applied. The European ideologies are contrasted with the right to free speech under the First Amendment in the U.S., highlighting the complexities of setting guidelines and practices around privacy issues when applied to real-life scenarios. Two ethical theories are explored: consequentialism and deontology. Finally, ethical decision-making models are applied to the framework of digital collections. Three case studies are presented to illustrate how privacy can be defined within digital collections in some real-world examples.

  • by Dan Wu
    770,-

    With the rapid development of the mobile Internet and smart personal devices in recent years, mobile search has gradually emerged as a key method with which users seek online information. In addition, cross-device search has also recently been regarded as an important research topic. As more mobile applications (APPs) integrate search functions, a user's mobile search behavior on different APPs becomes more significant. This book provides a systematic review of current mobile search analysis and studies user mobile search behavior from several perspectives, including mobile search context, APP usage, and different devices. Two user experiments were conducted to collect user behavior data. Then, through the data from user mobile phone usage logs in natural settings, we analyze the mobile search strategies employed and offer a context-based mobile search task collection, which can then be used to evaluate mobile search engines. In addition, we combine mobile search with APP usage to give a more in-depth analysis, such as APP transitions in mobile search and follow-up actions triggered by mobile search. Combining mobile search with APP usage can contribute to the interaction design of APPs, such as search recommendation and APP recommendation. Addressing the phenomenon of users owning more smart devices today than ever before, we focus on users' cross-device search behavior. We model the information preparation behavior and information resumption behavior in cross-device search and evaluate the search performance in cross-device search. Research on mobile search behaviors across different devices can help to understand online user information behavior comprehensively and help users resume their search tasks on different devices.

  • by Naresh Kumar Agarwal
    666,-

    The field of human information behavior runs the gamut of processes from the realization of a need or gap in understanding, to the search for information from one or more sources to fill that gap, to the use of that information to complete a task at hand or to satisfy a curiosity, as well as other behaviors such as avoiding information or finding information serendipitously. Designers of mechanisms, tools, and computer-based systems to facilitate this seeking and search process often lack a full knowledge of the context surrounding the search. This context may vary depending on the job or role of the person; individual characteristics such as personality, domain knowledge, age, gender, perception of self, etc.; the task at hand; the source and the channel and their degree of accessibility and usability; and the relationship that the seeker shares with the source. Yet researchers have not agreed on what context really means. While there have been various research studies incorporating context, and biennial conferences on context in information behavior, there is no clear definition of what context is, what its boundaries are, and what elements and variables comprise context. In this book, I look at the many definitions of context and the theoretical and empirical studies on it, and attempt to map the conceptual space of context in information behavior. I propose theoretical frameworks to map the boundaries, elements, and variables of context. I then discuss how to incorporate these frameworks and variables in the design of research studies on context, and arrive at a unified definition of context. This book should provide designers of search systems a better understanding of context as they seek to meet the needs and demands of information seekers. It will be an important resource for researchers in Library and Information Science, especially doctoral students looking for one resource that covers an exhaustive range of the most current literature related to context, the best selection of classics, and a synthesis of these into theoretical frameworks and a unified definition. The book should help to move forward research in the field by clarifying the elements, variables, and views that are pertinent. In particular, the list of elements to be considered, and the variables associated with each element, will be extremely useful to researchers wanting to include the influences of context in their studies.

  • by Lori McCay-Peet
    570,-

    Chance, luck, and good fortune are the usual go-to descriptors of serendipity, a phenomenon often coupled with famous anecdotes of accidental discoveries in engineering and science in modern history, such as penicillin, Teflon, and Post-it notes. Serendipity, however, is evident in many fields of research, in organizations, and in everyday life, and there is more to it than luck implies. While the phenomenon is strongly associated with in-person interactions with people, places, and things, most attention of late has focused on its preservation and facilitation within digital information environments. Serendipity's association with unexpected, positive user experiences and outcomes has spurred an interest in understanding both how current digital information environments support serendipity and how novel approaches may be developed to facilitate it. Research has sought to understand serendipity, how it is manifested in people's personality traits and behaviors, how it may be facilitated in digital information environments such as mobile applications, and its impacts at the individual, organizational, and wider levels. Because serendipity is expressed and understood in different ways in different contexts, multiple methods have been used to study the phenomenon and evaluate digital information environments that may support it. This volume brings together different disciplinary perspectives and examines the motivations for studying serendipity, the various ways in which serendipity has been approached in the research, methodological approaches to build theory, and how it may be facilitated. Finally, a roadmap for serendipity research is drawn by integrating key points from this volume to produce a framework for the examination of serendipity in digital information environments.

  • by Michael J. Paul
    816,-

    Public health thrives on high-quality evidence, yet acquiring meaningful data on a population remains a central challenge of public health research and practice. Social monitoring, the analysis of social media and other user-generated web data, has brought advances in the way we leverage population data to understand health. Social media offers advantages over traditional data sources, including real-time data availability, ease of access, and reduced cost. Social media allows us to ask, and answer, questions we never thought possible. This book presents an overview of the progress on uses of social monitoring to study public health over the past decade. We explain available data sources and common methods, and survey research on social monitoring in a wide range of public health areas. Our examples come from topics such as disease surveillance, behavioral medicine, and mental health, among others. We explore the limitations and concerns of these methods. Our survey of this exciting new field of data-driven research lays out future research directions.

  • by Wei Ding
    666,-

    Information Architecture is about organizing and simplifying information, designing and integrating information spaces/systems, and creating ways for people to find and interact with information content. Its goal is to help people understand and manage information and make the right decisions accordingly. This updated and revised edition of the book looks at integrated information spaces in the web context and beyond, with a focus on putting theories and principles into practice. In the ever-changing social, organizational, and technological contexts, information architects not only design individual information spaces (e.g., websites, software applications, and mobile devices), but also tackle strategic aggregation and integration of multiple information spaces across websites, channels, modalities, and platforms. Not only do they create predetermined navigation pathways, but they also provide tools and rules for people to organize information on their own and get connected with others. Information architects work with multi-disciplinary teams to determine the user experience strategy based on user needs and business goals, and make sure the strategy gets carried out by following the user-centered design (UCD) process via close collaboration with others. Drawing on the authors' extensive experience as HCI researchers, User Experience Design practitioners, and Information Architecture instructors, this book provides a balanced view of the IA discipline by applying theories, design principles, and guidelines to IA and UX practices. It also covers advanced topics such as iterative design, UX decision support, and global and mobile IA considerations. Major revisions include moving away from a web-centric view toward multi-channel, multi-device experiences. Concepts such as responsive design, emerging design principles, and user-centered methods such as Agile, Lean UX, and Design Thinking are discussed and related to IA processes and practices.

  • by Borchuluun Yadamsuren
    490,-

    Rapid technological changes and the availability of news anywhere, at any moment, have changed how people seek out news. Increasingly, consumers no longer take deliberate actions to read the news, instead stumbling upon news online. While the emergence of serendipitous news discovery online has been recognized in the literature, there is limited understanding of how people experience this behavior. Based on a mixed-methods study that investigated the online news reading behavior of residents in a Midwestern U.S. town, we explore how people accidentally discover news when engaged in various online activities. Employing the grounded theory approach, we define Incidental Exposure to Online News (IEON) as an individual's memorable experiences of chance encounters with interesting, useful, or surprising news while using the Internet for news browsing or for non-news-related online activities, such as checking email or visiting social networking sites. The book presents a conceptual framework of IEON that advances research and an understanding of serendipitous news discovery from people's holistic experiences of news consumption in their everyday lives. The proposed IEON Process Model identifies key steps in an IEON experience that could help news reporters and developers of online news platforms create innovative storytelling and design strategies to catch consumers' attention during their online activities. Finally, this book raises important methodological questions for further investigation: how should serendipitous news discovery be studied, measured, and observed, and what are the essential elements that differentiate this behavior from other types of online news consumption and information behavior?

  • by Michael Thelwall
    770,-

    In recent years there has been an increasing demand for research evaluation within universities and other research-based organisations. In parallel, there has been an increasing recognition that traditional citation-based indicators are not able to reflect the societal impacts of research and are slow to appear. This has led to the creation of new indicators for different types of research impact, as well as timelier indicators, mainly derived from the Web. These indicators have been called altmetrics, webometrics, or just web metrics. This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment, and outlines practical strategies for obtaining many web indicators. In addition to describing impact indicators for traditional scholarly outputs, such as journal articles and monographs, it also covers indicators for videos, datasets, software, and other non-standard scholarly outputs. The book describes strategies to analyse web indicators for individual publications as well as to compare the impacts of groups of publications. The practical part of the book includes descriptions of how to use the free software Webometric Analyst to gather and analyse web data. This book is written for information science undergraduate and Master's students who are learning about alternative indicators or scientometrics, as well as Ph.D. students and other researchers and practitioners using indicators to help assess research impact or to study scholarly communication.

  • by Reagan W. Moore
    686,-

    A trustworthy repository provides assurance, in the form of management documents, event logs, and audit trails, that digital objects are being managed correctly. The assurance includes plans for the sustainability of the repository, the accession of digital records, the management of technology evolution, and the mitigation of the risk of data loss. A detailed assessment is provided by the ISO 16363:2012 standard, "Space data and information transfer systems - Audit and certification of trustworthy digital repositories." This book examines whether the ISO specification for trustworthiness can be enforced by computer actionable policies. An implementation of the policies is provided, and the policies are sorted into categories for procedures to manage externally generated documents, specify repository parameters, specify preservation metadata attributes, specify audit mechanisms for all preservation actions, specify control of preservation operations, and control preservation properties as technology evolves. An application of the resulting procedures is made to enforce trustworthiness within National Science Foundation data management plans.
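
A minimal sketch of what a computer-actionable preservation policy can look like (our illustration over a toy in-memory repository; production systems express such rules in a rule engine such as iRODS): verify that every digital object has a recorded checksum and enough replicas, and report violations for the audit trail.

```python
# Toy repository state; real policies would query a catalog, not a list.
repository = [
    {"id": "obj-001", "checksum": "sha256:ab12...", "replicas": 2},
    {"id": "obj-002", "checksum": None,             "replicas": 1},
]

def audit(objects, min_replicas=2):
    """Apply fixity and replication policies; return policy violations."""
    violations = []
    for obj in objects:
        if not obj["checksum"]:
            violations.append((obj["id"], "missing fixity checksum"))
        if obj["replicas"] < min_replicas:
            violations.append((obj["id"], f"fewer than {min_replicas} replicas"))
    return violations

for obj_id, problem in audit(repository):
    print(f"policy violation: {obj_id}: {problem}")
```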

  • by Grace Hui Yang
    656,-

    Big data and human-computer information retrieval (HCIR) are changing IR. They capture the dynamic changes in data and the dynamic interactions of users with IR systems. A dynamic system is one which changes or adapts over time or over a sequence of events. Many modern IR systems and datasets exhibit these characteristics, which are largely ignored by conventional techniques. What is missing is an ability for the model to change over time and be responsive to stimulus. Documents, relevance, users, and tasks all exhibit dynamic behavior that is captured in datasets typically collected over long time spans, and models need to respond to these changes. Additionally, the size of modern datasets enforces limits on the amount of learning a system can achieve. Further to this, advances in IR interfaces, personalization, and ad display demand models that can react to users in real time and in an intelligent, contextual way. In this book we provide a comprehensive and up-to-date introduction to Dynamic Information Retrieval Modeling, the statistical modeling of IR systems that can adapt to change. We define dynamics and what it means within the context of IR, and highlight examples of problems where dynamics play an important role. We cover techniques ranging from classic relevance feedback to the latest applications of partially observable Markov decision processes (POMDPs), and a handful of useful algorithms and tools for solving IR problems incorporating dynamics. The theoretical component is based around the Markov Decision Process (MDP), a mathematical framework taken from the field of Artificial Intelligence (AI) that enables us to construct models that change according to sequential inputs. We define the framework and the algorithms commonly used to optimize over it, and generalize it to the case where the inputs are not reliable. We explore the topic of reinforcement learning more broadly and introduce another tool known as a Multi-Armed Bandit, which is useful for cases where exploring model parameters is beneficial. Following this, we introduce theories and algorithms which can be used to incorporate dynamics into an IR model before presenting an array of state-of-the-art research that already does, such as in the areas of session search and online advertising. Change is at the heart of modern Information Retrieval systems, and this book will help equip the reader with the tools and knowledge needed to understand Dynamic Information Retrieval Modeling.
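
A minimal sketch of the multi-armed bandit tool mentioned above, using epsilon-greedy exploration (our illustration; the book treats bandits and the MDP/POMDP frameworks in far more depth). The arms might be competing ranking functions and the reward a click; the click-through rates below are invented.

```python
import random

class EpsilonGreedyBandit:
    def __init__(self, n_arms: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select(self) -> int:
        """Mostly exploit the best arm; explore with probability epsilon."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm: int, reward: float) -> None:
        """Incrementally update the arm's mean reward."""
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Simulated click-through rates for three hypothetical ranking functions.
true_ctr = [0.05, 0.12, 0.08]
bandit = EpsilonGreedyBandit(n_arms=3)
random.seed(7)
for _ in range(10_000):
    arm = bandit.select()
    bandit.update(arm, 1.0 if random.random() < true_ctr[arm] else 0.0)
print("Estimated CTRs:", [round(v, 3) for v in bandit.values])
```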
