DIMACS Workshop on Social and Collaborative Information Seeking (SCIS)

May 14 - 15, 2015
DIMACS Center, CoRE Building, Rutgers University

Organizers:
Chirag Shah, Rutgers University, chirags at rutgers.edu
Rob Capra, University of North Carolina, Chapel Hill, rcapra at unc.edu
Preben Hansen, Stockholm University, preben at dsv.su.se
Presented under the auspices of the Special Focus on Information Sharing and Dynamic Data Analysis.

Abstracts:


Mark Ackerman, University of Michigan, Ann Arbor

Title: Collaborative Information Access in Health

People with chronic medical conditions, such as diabetes or depression, have long-term information needs. These conditions persist: diabetes cannot be cured, only controlled, and depression can wax and wane over one's lifetime. Many other chronic conditions, with similar information needs, exist.

Information seeking for these conditions occurs as a combination of individual and social information gathering, often in a rich ecology of information sources. Information gathering may be proactive, but it can also be quite passive. Often, information is gathered casually for later use, but it can also be garnered when a crisis occurs or when the condition changes for the individual. Often the need is very contextualized, as must be the information, since it is peculiar to one's lived experience: the specifics of one's body and one's socio-economic conditions.

Medical situations, in general, are exceedingly complex and often involve large numbers of clinicians and auxiliary personnel, as well as the family, extended family, and friends. As one gets older, the number of conditions and co-occurring conditions increases, increasing the complexity for the user. The average Medicare patient has 23 clinicians he or she is trying to juggle.

We are currently exploring a number of projects to help people sense-make their conditions and the information they are gathering from their various social worlds. Our research group has explored diabetes and hypertension in an underserved community, diabetes support, depression monitoring, information scaffolding in adult bone marrow transplant (BMT), information artifacts in pediatric BMT, and the use of socially-derived "translations" to understand medical advice more clearly. We have also examined in-hospital information exchanges among nurses and doctors, and we will shortly be starting studies of spinal cord injury patients and community-based depression management. We are also constructing a video-based prototype to help people navigate the differing and sometimes conflicting advice they get from their various communities and social worlds, as well as a tablet-based system to help in-patient BMT caregivers understand their child's condition.

Mark Ackerman is the George Herbert Mead Collegiate Professor of Human-Computer Interaction and a Professor in the School of Information and in the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. His major research area is Human-Computer Interaction (HCI), primarily Computer-Supported Cooperative Work (CSCW). He has published widely in HCI and CSCW, investigating collaborative information access in online knowledge communities, medical settings, expertise sharing, and most recently, pervasive environments. Mark is a member of the CHI Academy (HCI Fellow) and an ACM Fellow.


Rob Capra, University of North Carolina, Chapel Hill

Title: System Support for Collaborative Information Seeking

In collaborative information seeking, collaborators must not only conduct searches for information, but also coordinate their activities, including planning, communicating results, monitoring progress, creating shared representations of structure, and synthesizing findings (Morris & Teevan, 2010; Poltrock et al., 2003). Prior research on collaborative search has investigated systems that help groups communicate and coordinate activities during collaborative search. In this short talk, I will discuss several ways that systems could provide more "structural" support for collaborative search. Explicit system support for shared, modifiable representations of task and results structure could help teams in planning, searching, and sensemaking during a collaborative search process. This structure could also be leveraged by search algorithms to present results that match particular sub-goals. My goal is to present ideas that will generate discussion at the workshop.
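
To make the idea of a shared, modifiable task structure concrete, here is a minimal illustrative sketch (not Capra's actual system; all names and fields are hypothetical). It represents a collaborative search task as a tree of sub-goals, each of which can accumulate queries and saved results that a ranking component could later match against a particular sub-goal.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SavedResult:
        url: str
        title: str
        saved_by: str                 # which collaborator saved it

    @dataclass
    class SubGoal:
        label: str                    # e.g. "battery life"
        queries: List[str] = field(default_factory=list)
        results: List[SavedResult] = field(default_factory=list)
        children: List["SubGoal"] = field(default_factory=list)

        def coverage(self) -> int:
            """Saved results in this sub-goal and all of its descendants."""
            return len(self.results) + sum(c.coverage() for c in self.children)

    # Usage: a two-person team structuring a laptop-purchase task.
    task = SubGoal("choose a laptop",
                   children=[SubGoal("battery life"), SubGoal("price")])
    task.children[0].queries.append("ultrabook battery life comparison")
    task.children[0].results.append(
        SavedResult("http://example.org/review", "Battery review", "alice"))
    print(task.coverage())            # -> 1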


Kaitlin Light Costello, University of North Carolina, Chapel Hill

Title: Collaborative crosschecking: Patients teaching patients how to evaluate health information in online support groups for chronic kidney disease

Patients living with lifelong health conditions often search for information about their health throughout their illness trajectory (Johnson & Case, 2012). Many patients diagnosed with chronic conditions are increasingly turning to the Internet for health information (Fox & Duggan, 2013), where they may encounter online support groups [OSGs] dedicated to their chronic condition. In OSGs, patients routinely exchange information and social support with one another as they come to terms with their diagnosis, make treatment decisions, and learn what to expect as their illness progresses. Healthcare providers are often concerned that patients will find misinformation both in OSGs and on static websites, and they may even deter patients from using the Internet as an information source because of concerns about credibility (Chung, 2013). In OSGs, misinformation is often corrected relatively quickly by other users (Esquivel, Meric-Bernstam, & Bernstam, 2006).

However, my research suggests that users do not merely correct misinformation when they encounter it in an OSG. Participants in a recently completed two-year grounded theory study examining the information behaviors of patients diagnosed with chronic kidney disease [CKD] in OSGs attempt to teach other users how to evaluate the credibility of health information posted in OSGs when they encounter misinformation. This is a process that I call collaborative crosschecking. Crosschecking is common among my participants, and occurs when one person consults multiple sources of information in order to verify information. Collaborative crosschecking, therefore, is a collaborative information literacy practice whereby users attempt to teach other users how to verify information by sharing their own crosschecking techniques. That is, my participants do not simply correct misinformation when they encounter it in OSGs: they walk through their own evaluation process in an attempt to teach others to evaluate the trustworthiness of information. This often results in additional users adding their own sources and giving their own tips for crosschecking. Interestingly, my participants tell me that they are careful when they engage in collaborative crosschecking: they use gentle language, remind users that "everyone is different" and that not all information about CKD applies to everyone, and often provide multiple references to back up their claims.

Collaborative crosschecking serves multiple functions: it refutes the misinformation from the original post, offers evidence supporting the correct information, and fosters an understanding of how to evaluate information by offering clear instructions. This collaborative information literacy practice likely extends beyond the health domain. For example, it is similar but not identical to the "call and avalanche" pattern of receiving answers to questions in online forums for massively multiplayer online games (Martin & Steinkuehler, 2010). Additional research is necessary to determine whether collaborative crosschecking occurs in other domains. Further research must also explore the effectiveness of collaborative crosschecking in disseminating information literacy skills and in correcting misinformation.

References

Chung, J. E. (2013). Patient-provider discussion of online health information: Results from the 2007 Health Information National Trends Survey (HINTS). Journal of Health Communication, 18(6), 627-648.

Esquivel, A., Meric-Bernstam, F., & Bernstam, E. V. (2006). Accuracy and self correction of information received from an internet breast cancer list: Content analysis. BMJ, 332(7547), 939-942.

Fox, S., & Duggan, M. (2013). Health online 2013 (p. 55). Washington, D.C.: Pew Research Center's Internet & American Life Project. Retrieved from http://www.pewinternet.org/files/old-media//Files/Reports/PIP_HealthOnline.pdf (Archived by WebCite at http://www.webcitation.org/6Wsi5u8S9).

Johnson, J. D., & Case, D. O. (2012). Health information seeking. Peter Lang Publishing.

Martin, C., & Steinkuehler, C. (2010). Collective information literacy in massively multiplayer online games. E-Learning and Digital Media, 7(4), 355-365.

Roberto Gonzalez-Ibanez, Universidad de Santiago de Chile

Title: Affective Dimension in Collaborative Information Seeking

Emotions and other affective processes have long been considered essential elements in people's lives. Despite emotion research conducted in different domains, little is known about the role of the affective dimension in the information search process of teams. Researchers have shown an active role of affective processes such as feelings and emotions in individual information seeking; however, such findings do not necessarily apply to collaborative settings. This talk aims to reflect on the importance and challenges of research on the affective dimension in collaborative information seeking (CIS). To achieve this goal, the talk is structured in three parts. First, an overview of the relevance of the affective dimension is provided. Second, some key findings from research on the affective dimension in individual information seeking in general, and CIS in particular, are presented. Finally, research approaches, challenges, and ethical aspects of this type of study are discussed. This talk hopes to encourage CIS researchers to explore the affective dimension in their studies, formulate new research questions and hypotheses, and share their findings with the community.

About the author

Roberto Gonzalez-Ibanez is an Assistant Professor at Universidad de Santiago de Chile (USACH). He received his PhD degree in 2013 from Rutgers University. His dissertation focused on the role of initial affective processes in individual and collaborative information seeking. Currently he is working on research projects involving collaboration, affective dimension, social media, web observatories, and human information interaction.


Daqing He, University of Pittsburgh

Title: Context-Sensitive Supports for Collaborative Information Retrieval

This brief talk presents our recent efforts to explore the effectiveness of various search contexts in supporting collaborative information retrieval (CIR). We are particularly interested in understanding the unique search contexts available in CIR. Therefore, we examined the effectiveness of search context drawn not only from the user's own search history, but also from the partner's search history, as well as from the team's explicit collaboration behaviors, such as their chats. Our results demonstrate that context-sensitive CIR is unique in its ability to consider both the user's own and the partner's search history, and, more importantly, to draw various contributions from the chats between team members.
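
A minimal sketch of this idea, assuming search histories and chats are available as plain text and using illustrative source weights (not the weights or methods from the study): terms are pooled from the user's own history, the partner's history, and the team chat, and the strongest ones are used to expand the current query.

    from collections import Counter

    def context_terms(own_history, partner_history, chats,
                      weights=(1.0, 0.5, 0.8), top_k=3):
        """Pool terms from three context sources, weighted by source."""
        scores = Counter()
        for texts, w in zip((own_history, partner_history, chats), weights):
            for text in texts:
                for term in text.lower().split():
                    scores[term] += w
        return [term for term, _ in scores.most_common(top_k)]

    def expand_query(query, own_history, partner_history, chats):
        """Append the strongest context terms not already in the query."""
        extra = [t for t in context_terms(own_history, partner_history, chats)
                 if t not in query.lower().split()]
        return query + " " + " ".join(extra)

    print(expand_query("gun control",
                       own_history=["gun control statistics by state"],
                       partner_history=["firearm legislation history"],
                       chats=["let's focus on statistics and legislation"]))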


Simon Knight, Open University, UK

Title: Collaborative Information Seeking Tasks as Complex Performance Assessments

My work has focussed on the development of two tasks, Collaborative Information Seeking (CIS) and Collaborative Multiple Document Processing, for the purpose of developing a performance assessment for higher-level literacy. The research uses the Coagmento browser add-on in a novel, and large scale, context. Students were asked to work in pairs to write summaries of 'the best supported claims' regarding a contested scientific issue. Analysis focuses on the relationship between CIS processes and learning outcomes.


Christopher Leeder, Rutgers University

Title: Library research as collaborative information seeking

Today's students are accustomed to collaborative information behavior, with group work being a common requirement in educational settings. This talk presents the results of a study of students conducting collaborative research using library resources. Participants used the Coagmento collaborative search system in a library lab while working on a class assignment. The results demonstrate that there are benefits and drawbacks to collaborative information seeking. Findings showed that students working collaboratively found more useful sources and achieved greater information coverage, while individuals showed better results for query effectiveness and number of relevant sources. Challenges that students face when conducting library research were identified. The findings of this study offer suggestions on how to support group work, and how collaborative search systems can address the challenges faced by students doing group work with library resources.


Javed Mostafa, University of North Carolina at Chapel Hill

Title: Collaborative Search Challenges for Adaptive and Personalized Search for the Elderly

Health information seeking, broadly, is one of the most popular usages of the Web (http://www.pewinternet.org/fact-sheets/health-fact-sheet/). In consumer-oriented health information seeking research there is a lack of attention to information seeking challenges that elderly users face. While seeking health information online, elderly users often involve and depend on other "co-consumers" of such information, particularly their caregivers and their physicians. Caregivers or physicians sometimes find themselves in the role of search result "interpreters", whereby they have to explain the search output or suggest improved search strategies. It is not uncommon for caregivers or physicians, on occasion, to engage in searching on behalf of elderly users. There is currently very little understanding of the dynamics of such collaborations in information seeking and consequently it is even rarer to find any search tools and systems designed specifically for such collaborative usage.

For the past 15 years or so, we have investigated a variety of different dimensions and challenges associated with personalized information retrieval. With the focus on delivering timely and highly accurate search results, we developed techniques and systems for content representation, profile acquisition, and interaction designs. One major area we investigated is machine learning approaches for combining cumulative knowledge from user profiles in collaborative or social information retrieval environments. Identifying interest profile similarities in social (networked) environments can be useful for a wide variety of purposes, e.g., to address "cold start" challenges in profile acquisition or to discover cohorts of users who share interest in the same topics. Among the many techniques we applied, a primary one involved multi-agent learning to support personalization among groups of users.

Recently, we started exploring the problems associated with serving elderly searchers' needs by using a similar multi-agent paradigm. The vision is to acquire a set of three profiles, i.e., a profile-triplet: one for the elderly end-user, one for the caregiver, and one for the physician. The initial data for the profile-triplet is acquired from the context of an electronic health record, which represents the general health profile of an elderly patient. The subsequent step is to apply machine learning approaches to adaptively update these profiles and combine the critical information in them to support a single user's personalization needs in an ongoing way. We recently started exploring a research collaboration with Dr. Phil Sloan, currently the Director of the Program on Aging, Disability, and Long-Term Care. Dr. Sloan and his team developed a knowledge resource, called the Alzheimer's Medical Advisor (http://alzmed.unc.edu/), for serving the information needs of caregivers who support elderly patients suffering from dementia. Two key goals of the collaboration are: 1) develop intelligent, multi-agent personalization techniques that combine the information needs of patients, caregivers, and physicians to improve the quality and effectiveness of the information delivered, and 2) create theoretical frameworks and models that can improve our understanding of collaborative search, particularly as it pertains to consumer-health information seeking.
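
A minimal sketch of the profile-triplet idea, assuming each profile is a simple term-weight dictionary and using an illustrative weighted merge rather than the project's actual machine learning approach (all names, weights, and data below are hypothetical):

    def combine_profiles(patient, caregiver, physician, mix=(0.5, 0.3, 0.2)):
        """Merge the three profiles of a profile-triplet into one."""
        combined = {}
        for profile, w in zip((patient, caregiver, physician), mix):
            for term, weight in profile.items():
                combined[term] = combined.get(term, 0.0) + w * weight
        return combined

    def score(document_terms, profile):
        """Score a document by summing the profile weights of its terms."""
        return sum(profile.get(t, 0.0) for t in document_terms)

    # Hypothetical profiles seeded from an electronic health record
    patient   = {"memory": 0.9, "sleep": 0.6}
    caregiver = {"medication": 0.8, "memory": 0.4}
    physician = {"donepezil": 0.9, "dosage": 0.7}

    profile = combine_profiles(patient, caregiver, physician)
    docs = {"doc1": ["memory", "sleep", "tips"],
            "doc2": ["donepezil", "dosage", "guide"]}
    print(sorted(docs, key=lambda d: score(docs[d], profile), reverse=True))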

My interest in joining the workshop is to share some early conceptualizations of collaborative search challenges in the health domain, gather feedback and insights on possible directions for personalization approaches for collaborative search, and contribute toward continued development of the broader area of collaborative search through new initiatives, programs, and identification of funding sources.


Douglas W. Oard, University of Maryland

Title: Collaborative Cross-Language Search

In this talk, I will briefly review what little we know about collaborative Cross-Language Information Retrieval (CLIR), with an eye towards starting a discussion about a research agenda. To the usual list of talents that are distributed unevenly across a population of searchers (e.g., search strategies, domain expertise, and prior knowledge), collaborative CLIR adds language expertise. I will begin by reviewing a project on collaborative translation in which people with complementary language skills (one knowing the source language, the other the target language) worked together (with system assistance) to produce translations. I'll then briefly describe one session of a larger user study in which we sought to enhance recall, a challenge that is somewhat more difficult in CLIR than in monolingual settings, by reassigning low-yield topics to searchers who might bring different search strategies to bear. Drawing on this experience, I'll then step back to say a few words about what makes the CLIR setting different, and how those differences might help to inform the research agenda for collaborative information retrieval.

About the Speaker:

Douglas Oard is a Professor at the University of Maryland, College Park, with joint appointments in the College of Information Studies and the Institute for Advanced Computer Studies (UMIACS). Dr. Oard earned his Ph.D. in Electrical Engineering from the University of Maryland. His research interests center around the use of emerging technologies to support information seeking by end users. Additional information is available at http://terpconnect.umd.edu/~oard/.


Jeremy Pickens, Catalysts Inc.

Title: Task-Constrained Collaborative Information Seeking

In information seeking, when collaboration is implicitly moderated, the range of design patterns around collaborative activities is limited. However, when collaboration is explicit, a wider range of possibilities opens up. When two or more searchers collaborate on a task, roles do not necessarily have to be symmetric. Well-known role asymmetries include different levels or types of expertise and different levels or types of search activity. However, it is often assumed that the information need itself is symmetrically shared, that all collaborators have equal stake in the task, even if their roles in supporting that need differ. For some domains, such as families deciding to make large purchases or friends traveling together, this need-symmetry assumption is not unreasonable. However, in certain professional domains such as law, the needs themselves are not always jointly negotiable between collaborative partners. Instead, one collaborator will define the need and the remaining collaborators are charged with the task of supporting that need. Information needs may evolve as learning takes place over the course of a collaborative session, but due to the professional nature of the task, ultimately the evolving need may only be approved by certain collaborators. In such domains, it is an open question how collaborative systems can best support team activity, a question that we hope to discuss at the workshop.


Soo Young Rieh, University of Michigan

Title: Evaluation Measures in Social Search

Interactive Information Retrieval research has used a large number of evaluation measures, including relevance, utility, efficiency, user satisfaction, and usability. In this talk, I propose to develop a set of evaluation measures for social search derived from two empirical studies conducted in workplaces and social Q&A services. Four basic categories of measures will be discussed: performance measures, informational outcomes, social outcomes, and user experience. To measure performance, in addition to traditional criteria such as recall and precision, metrics for information diversity can be developed. The category of informational outcomes includes measuring information quality based on comprehensiveness, novelty, and trustworthiness of information. The category of social outcomes refers to users' appreciation of other people's attempts, effort, responsiveness, and understanding of information needs. The category of user experience includes subjective assessment of the overall search experience, such as the feeling that time was well spent, increased certainty about the problem, and perceived learning of new information. Future directions for developing and testing evaluation measures in social search will also be discussed.
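
As a small worked example of the performance category (an illustration with invented data, not the measures as operationalized in the studies), precision and recall can be computed alongside a simple information-diversity score, here the fraction of known topics covered by the retrieved items:

    def precision_recall(retrieved, relevant):
        """Traditional set-based performance measures."""
        retrieved, relevant = set(retrieved), set(relevant)
        hits = len(retrieved & relevant)
        precision = hits / len(retrieved) if retrieved else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        return precision, recall

    def topic_diversity(retrieved, topic_of):
        """Fraction of all known topics represented among retrieved items."""
        covered = {topic_of[d] for d in retrieved if d in topic_of}
        return len(covered) / len(set(topic_of.values()))

    retrieved = ["d1", "d2", "d3"]
    relevant  = ["d1", "d3", "d4"]
    topic_of  = {"d1": "treatment", "d2": "diet",
                 "d3": "treatment", "d4": "exercise"}

    print(precision_recall(retrieved, relevant))   # (0.67, 0.67)
    print(topic_diversity(retrieved, topic_of))    # 2 of 3 topics = 0.67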


Chirag Shah, Rutgers University

Title: Social and Collaborative Information Seeking (SCIS): Space, Time, and Beyond

Space and time are considered the most defining characteristics for classifying various collaborative and social activities. I will talk about new knowledge concerning these two important dimensions as they apply to SCIS tasks, and how we obtained it. Specifically, I will outline various lab and field studies we have conducted to learn about the trade-offs we observe in people working in co-located vs. remote conditions, and people working synchronously vs. asynchronously. In addition, I will talk about the roles of other important dimensions in SCIS that we have identified: communication, awareness, affect, and group size. I will also allude to the connections between the S and C elements of SCIS.


Aiko Takazawa, UIUC

Title: Social and Collaborative Information Seeking

Seven Japanese women living in Finland became leaders of a self-organized humanitarian aid group in response to the 2011 Great Tohoku Earthquake and Tsunami disaster in Japan. The way this group managed to send bulk shipments of baby formula from Finland to Japan is a fascinating case for studying, holistically, how people collaboratively search for, use, and seek information with the technologies available to them. Since this group emerged in a natural setting mediated by social media, without being guided by an established affiliation among participants or managed by an outside source, its process of becoming and being a group provides deep insight into the substantive context for various intertwined individual and collaborative information activities. I claim that such messiness in the present case represents the reality of ordinary people living in the present ICT-mediated environment, although what the group ended up doing transcended the ordinary. From a broad perspective, this case demonstrates the potential for expanding existing concepts relevant to Social and Collaborative Information Seeking research by looking at the group's gradually constructed information needs, resulting from browsing in a social context, serendipitous searching, and collaborative learning.

The present case's information activities are situated in particular circumstances, as the individuals concerned shared a vague aspiration, expressed with strong compassion, to offer meaningful aid directly and immediately to the victims of the 2011 national crisis in Japan. A loosely formed assemblage of likeminded individuals started conversing in social media spaces, and their "conversations" were carried out on different platforms involving both digital and physical spaces. Moreover, a number of other activities (or steps) had to proceed in parallel in order to implement the idea of sending the baby formula, for example: correspondence and reporting about local transport (e.g., talking to FinnAir Cargo), coordination of tasks and procedures, fundraising and procurement of the baby formula, preparation for the exportation of emergency food rations, and the packaging and loading work. How the TTJ evolved and completed six shipments of 12,000 cartons of formula in a timely manner illustrates how information activities evolve in social contexts and become powerful, and how people engage in collaborative information seeking and search even without knowing a particular information need.

Using publicly available data, particularly the group's Twitter activity and weblog, as well as interview data from a few participants, I am currently trying to identify basic features that explain how indirect, opportunistic collaboration among likeminded individuals took shape as they worked on unstructured information tasks situated in social technologies. In this workshop, I would like to discuss how to make sense of social data drawn from a microcosm case study. I want to understand how ordinary people, information, and technologies interact, and how their intertwining social and collaborative information activities enable them to achieve meaningful accomplishments.


Sandra Toze, Dalhousie University

Title: Exploring the Group within Social and Collaborative Search

Within social and collaborative search, the "Group" represents a unique level with specific attributes (interaction, interdependence, awareness, and shared understanding) that need to be better understood and supported. To address this gap, my research has focussed on understanding and modelling information needs, seeking, and use at the group level. For my dissertation research, I collected longitudinal data from seven student groups working on multiple tasks over time, and used a structured task analysis to deconstruct when and how a group 1) identifies information needs, 2) satisfies these needs by seeking through various channels, and 3) collectively uses the found information to solve problems, make decisions, and generate something new. To ensure motivated participants, the student groups I recruited completed class-based assignments that were independent of my research and represented a significant part of their final class grade. Data collected included 60 hours of video and log files from 25 different group sessions. This longitudinal data set is unique within the field of group research. The naturalistic lab study method addressed a key methodological challenge of studying groups, allowing the complex details of group work to be captured as they unfolded naturally over time.

To guide my analysis, I combined a rhythm-based model of group task accomplishment (Marks, Mathieu, & Zaccaro, 2001) with an information behaviour lens (Choo, 2006; Marchionini, 1995; Wilson, 1999) to create an integrative framework. I first analyzed the procedural aspects of group work and found that all the groups shifted between three phases of group activity: Planning, Doing, and Monitoring. Within each phase, I then identified and described the elements of a group information process: the information tasks, information task goals, information activities, sources, tools, artefacts, roles, and shifts in participation. Groups looked for information to satisfy eight different goals, requiring 19 different information activities, as well as specific sources and tools to generate new artefacts. Ten roles were observed within the groups for managing their information activities, and participation fluctuated from the individual through to the group. The relationships between these elements were described. Integrative analysis revealed that the student groups did not have good mechanisms for managing information needs, and encountered the greatest difficulties when trying to use information collectively. Additionally, challenges when searching together were identified.

Based on the findings of my research, I made recommendations for tools and processes to facilitate more effective group work. My definition and conceptual model of the Group Information Process extends our understanding of information behaviour within groups, and provides a base that can be used to ground further research. Of particular interest for this workshop are the richness of my data, the process of analyzing complex group data, and the naturalistic lab methodology. These represent new avenues for moving the research agenda forward. Currently, I am investigating collaborative information use in groups by analyzing key moments from the videos, using conversation analysis to examine the relationship between information activities and the formation of shared understanding.


Michael Twidale, University of Illinois Urbana-Champaign

Title: Searching for help: how learning technologies involves collaborative search

As computational and informational resources become ever more abundant, we see changes in the way people learn how to use them: how they adopt, adapt, appropriate, tinker with, tailor, combine, and modify them. Examples include software developers who search as they code, and data scientists going online to get ideas for how best to clean, combine, and manipulate datasets. However, such activities are not restricted to the computational elites. Across all levels, tech learning is often both a search activity and a social activity, synchronous and asynchronous, co-located and remote, with colleagues and strangers.

Doing this kind of searching as part of technology learning and problem solving accentuates particular difficulties in the search process. Various strategies and tactics can dramatically improve efficiency; equally, a lack of certain skills or the possession of certain misconceptions can degrade people's ability to learn and cope, and even lead them to self-define as "not-techie". This raises important implications for design, policy, and education.


Jyothi Vinjumur, University of Maryland

Title: Reducing E-Discovery Cost with Collaborative Review Process

Seeking relevant information and protecting sensitive content that may be intermixed with relevant information are two different goals, but in certain situations a balance must be struck between the two. One example is the protection of content that is subject to a claim of attorney-client privilege when sharing responsive evidence incident to civil litigation, a process called e-discovery. In e-discovery, the use of automated retrieval techniques to retrieve responsive evidence has not brought the hoped-for cost savings, since attorneys are reluctant to trust automated methods for privilege review (for example, attorney-client privilege) and therefore frequently advocate manual review of the responsive set. Such manual assessments of privilege require expert legal knowledge, making the review procedure very expensive.

Thus, the main question in e-discovery is not just what the technology is able to do or how legal professionals use what we build, but also how legal professionals and the technology could collaborate to ensure proper production at a proportionate cost. Although legal professionals have been quick to embrace technology-supported retrieval and review techniques, the increasing volume of potential electronic evidence that must be reviewed is overwhelming. Many factors, such as context, cognition, and annotator expertise, affect the process and quality of the review. In the privilege review task, the nature of the content to be protected is generally unknown to the reviewer in advance, and different reviewers may hold different opinions. In addition to this complexity, it is not realistic to expect human reviewers to be infallible.

The intuition in this paper is that the review process could be more effective, in both time and performance, as a collaborative task than as a solitary activity. Thus, this paper aims to build an interactive web-based system that supports the manual review process by facilitating explicit collaboration among annotators reviewing large volumes of electronic evidence, with the goal of optimizing e-discovery cost (both review and training). In order to arrive at high-quality and cost-effective relevance and privilege assessments, this paper proposes, as a first step, building a Collaborative Technology-Assisted Review (CTAR) tool that can help lawyers make faster and more accurate judgments during review.


Ellen M. Voorhees, National Institute of Standards and Technology

Title: Evaluating Systems that Support SCIS

I am the lead for the Text REtrieval Conference (TREC) project, a series of workshops that build the infrastructure necessary for large-scale evaluation of information access systems. In this role I participate in the development of evaluation methodologies for information processing tasks and perform research that focuses on understanding the benefits and limitations of those methodologies (for example, see [1, 2, 3]). I am interested in contributing to the roadmapping effort for SCIS.

References

[1] E. M. Voorhees. Variations in relevance judgments and the measurement of retrieval effectiveness. Information Processing & Management, 36(5):697-716, 2000.

[2] E. M. Voorhees. The philosophy of information retrieval evaluation. In Evaluation of Cross-Language Information Retrieval Systems, pages 355-370. Springer, 2002.

[3] E. M. Voorhees. On test collections for adaptive information retrieval. Information Processing & Management, 44(6):1879-1885, 2008.

Hassan Zamir, University of South Carolina

Title: Social Searching and Information Recommendation Systems

I am strongly interested in participating in the Social and Collaborative Information Seeking workshop to learn more about how social and collaborative relationships are useful in information searching. I am currently at the early stage of writing my doctoral dissertation, which concentrates on the usefulness of social media data for recommending the right information content to the right users at the right time. An active practicum on the theories, models, techniques, and usefulness of social and collaborative search will be very valuable for planning and selecting appropriate methodologies for my research. Apart from my dissertation research, I conduct investigations in areas related to social media information retrieval.

The omnipresence of social media tools enables people to produce and reproduce content instantly and share it with the world almost effortlessly. This makes the task of information seeking more complex and challenging, especially when it comes to retrieving relevant items. Online social media users actively use various Web 2.0 tools to report social crises and events, protests, occurrences, natural disasters, political debates, policy dialogs, and more. To enable quick information retrieval, social media sites adopt user-friendly mechanisms such as tagging, organizing topics by trends, searching, and categorizing content by general-interest subjects. However, information gathering is a combined task that requires explicit and implicit collaboration and participation from others. Social media tools harness the power of crowds purposefully, which makes the task of information recommendation easier and more convenient.

I am interested in developing systems that can recommend information content to users based on information available on social platforms. Companies like Amazon, Netflix, and Pandora are common examples of organizations widely using recommendation methods based on collaborative filtering. Recommendation methods also have the potential to fit in libraries, although patron privacy issues need to be handled carefully. General distance and similarity measures, such as Manhattan distance, Euclidean distance, and the Pearson correlation coefficient, can help identify similar books or information resources and suggest them to library patrons. Similar data mining techniques evidently work well with online social media data. My dissertation research focuses on social movement tweets, with the purpose of examining how Twitter can suggest information content to tweeters. The role of explicit and implicit filtering in this context needs to be investigated as well. In the case of Twitter, 'favorite', 'retweet', and 'follow' information can be utilized to observe the explicit behavior of tweeters. Furthermore, implicit user behaviors, including click-through, eye-tracking, and information generating and sharing behavior, can potentially reveal personalized information preferences. Eventually, this technique has implications for grouping and recommending information content to users.
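
The distance and similarity measures named above can be sketched in a few lines. The following is an illustrative example with invented rating data (not part of the dissertation work), comparing two items over the users who rated both:

    from math import sqrt

    def manhattan(a, b):
        common = set(a) & set(b)
        return sum(abs(a[u] - b[u]) for u in common)

    def euclidean(a, b):
        common = set(a) & set(b)
        return sqrt(sum((a[u] - b[u]) ** 2 for u in common))

    def pearson(a, b):
        common = list(set(a) & set(b))
        n = len(common)
        if n < 2:
            return 0.0
        xs, ys = [a[u] for u in common], [b[u] for u in common]
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy) if sx and sy else 0.0

    # Hypothetical ratings ({user: rating}) for two library books
    book_a = {"u1": 5, "u2": 3, "u3": 4}
    book_b = {"u1": 4, "u2": 2, "u3": 5}

    print(manhattan(book_a, book_b))               # 3
    print(round(euclidean(book_a, book_b), 2))
    print(round(pearson(book_a, book_b), 2))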


Yinglong Zhang, The University of Texas at Austin

Title: Culture and Trust in Collaborative Information Seeking

Why do people need collaborative search? One common motivation for individuals to use collaborative search is that people trust their friends more than strangers. My prior research has found that people are prone to judge information as irrelevant and refuse to use it when they consider it unreliable. It seems that the success of collaborative work largely depends on whether members of a collaborative group can trust each other. Culture is one of the many important factors that heavily influence the development of trust in collaboration. Individuals from different cultures can interpret and understand the same problem in distinct ways based on their own cultural knowledge and beliefs. Gaps in problem representation can increase misunderstanding and conflict in collaboration, thereby weakening trust among individuals. Aiming to address this issue in the context of collaborative information seeking, I am interested in investigating what factors contribute to the development of trust in an intercultural group and how to design collaborative search systems that make people trust each other more. Influenced by theories and methods from Human-Computer Interaction as well as cognitive science, I seek to adopt quantitative and qualitative methods to address these questions of interest.

