Smart Technologies in Enhancing Browsing Experiences

ODSC - Open Data Science
3 min read · Mar 3, 2020

Information search involves various techniques and methods for finding new data and insights. Physical and digital spaces, as different contexts, offer information seekers unique advantages: the physical environment provides spatial layout and interaction with tangible objects, while online information systems support browsing and knowledge discovery.

The inconsistency between information search tasks and their interfaces challenges the role of physical places. Compared to a physical collection, where each object sits in a single place, a digital space allows items to be inspected from various locations. Furthermore, a digital space can demonstrate multiple relationships between artifacts. Some information search tasks are more easily performed in physical environments; however, digital spaces support more efficient information retrieval.

Browsing, as an approach designed for exploration rather than targeted search, mostly takes place online. A few teams at Harvard and MIT have therefore been designing new ways of communicating with interfaces (e.g. LUI for YouTube) or new additions that complement online browsing (e.g. LeapVIS). Such web interfaces provide custom frameworks of free-handed gestures and voice for controlling content, as sketched below. But what happens with browsing activity in physical spaces? As much of the research in this domain notes, digital arenas allow items to be inspected from various locations, unlike physical spaces, where collections of objects are located in a single place.
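As a rough illustration of what such voice-driven control can look like in a browser, the sketch below maps spoken commands onto content actions using the Web Speech API (vendor-prefixed in some browsers). The command words and handler functions are hypothetical assumptions for illustration, not part of LUI or LeapVIS.

```typescript
// Rough sketch of voice-driven content control in the browser, using the
// Web Speech API. Command words and handler functions are illustrative
// assumptions, not taken from any system named above.

const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.continuous = true; // keep listening while the user browses
recognition.lang = "en-US";

recognition.onresult = (event: any) => {
  const last = event.results[event.results.length - 1];
  const transcript: string = last[0].transcript.trim().toLowerCase();

  if (transcript.includes("next")) showNextItem();
  else if (transcript.includes("back")) showPreviousItem();
  else if (transcript.includes("play")) playFocusedItem();
};

recognition.start();

// Hypothetical handlers that would update the browsing view.
function showNextItem() { /* advance to the next item */ }
function showPreviousItem() { /* return to the previous item */ }
function playFocusedItem() { /* start playback of the focused item */ }
```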

Context-driven visualizations have mainly been used for goal-oriented search tasks (e.g. wayfinding). Exploring a physical space allows users to choose different paths through a large object collection, experiencing how subsets of objects create different visual expressions. Moreover, in a physical environment users tend to examine the space and interact with objects more frequently, being more attracted to the actual context. Browsing is an important online activity, but it fails to reflect this physical context information.

The challenge, therefore, is to create applications that combine the advantages of both the physical and digital worlds. Naturally, augmented reality (AR) or mobile devices could bring information into physical spaces. Examples such as AR-based visual systems for libraries (Hololibrary) or phone-based visual companions for bookstores (BookVIS) benefit from immediate, rich information retrieval in a physical environment. Projections and screens that support free-handed gestures could also enhance browsing experiences, but without being deployed on a heads-up display they lack spatial intuition. Spatially aware systems provide in-situ information related to both the object and the user, adjusting the level of presentation detail to the information seeker. Digital realities, on the other hand, lack tactility, but combined with real objects they can compensate for this missing component while enabling immediate visualization of object-related data.
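To make the idea of adjusting presentation detail concrete, here is a minimal sketch of a distance-based detail policy: the closer the information seeker stands to an object, the richer the in-situ presentation. The thresholds, types, and rendered strings are assumptions for illustration, not the behavior of Hololibrary or BookVIS.

```typescript
// Minimal sketch of a spatially aware detail policy. Thresholds and types
// are illustrative assumptions, not taken from any system named above.

type DetailLevel = "glance" | "summary" | "full";

interface SituatedObject {
  id: string;
  title: string;
  position: { x: number; y: number; z: number }; // position in meters
}

interface UserPose {
  x: number;
  y: number;
  z: number;
}

// Choose how much to present based on the user's distance from the object.
function detailFor(distanceMeters: number): DetailLevel {
  if (distanceMeters > 3) return "glance";  // across the room: just a label
  if (distanceMeters > 1) return "summary"; // a few steps away: key facts
  return "full";                            // at arm's length: full dashboard
}

function presentation(obj: SituatedObject, user: UserPose): string {
  const d = Math.hypot(
    obj.position.x - user.x,
    obj.position.y - user.y,
    obj.position.z - user.z
  );
  switch (detailFor(d)) {
    case "glance":
      return obj.title;
    case "summary":
      return `${obj.title}: author, rating, availability`;
    case "full":
      return `${obj.title}: full details, related items, reviews`;
  }
}
```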

Spatially aware AR systems provide richer information about a limited set of objects or actions. Online systems have been created to support better, more intuitive browsing, but they are too complex for users who still prefer to wander through aisles and browse spaces (e.g. bookshelves). By combining all the available data with gaze-based interactions, web-based AR applications could improve everyday tasks, enrich users' experience, and promote new ways of interacting with objects as well as between people.
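As a sketch of what gaze-based interaction could look like in a web-based AR scene, the snippet below approximates the user's gaze with a ray cast from the center of the camera view using three.js. The scene setup and the data lookup for a picked object are assumed; fetchObjectData is a hypothetical placeholder.

```typescript
// Sketch of gaze-based picking for a web AR scene built with three.js.
// The gaze is approximated by a ray cast from the center of the camera view.

import * as THREE from "three";

const raycaster = new THREE.Raycaster();
const screenCenter = new THREE.Vector2(0, 0); // normalized device coordinates

function pickGazedObject(
  camera: THREE.Camera,
  scene: THREE.Scene
): THREE.Object3D | null {
  raycaster.setFromCamera(screenCenter, camera);
  const hits = raycaster.intersectObjects(scene.children, true);
  return hits.length > 0 ? hits[0].object : null;
}

// Called every frame: when the user's gaze lands on an object, request its
// related data for an in-situ dashboard (fetchObjectData is hypothetical).
function onFrame(camera: THREE.Camera, scene: THREE.Scene): void {
  const target = pickGazedObject(camera, scene);
  if (target && typeof target.userData.id === "string") {
    // fetchObjectData(target.userData.id).then(renderDashboard);
  }
}
```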

All of the systems mentioned above aim to bridge the gap between physical and digital arenas by using digital data associated with physically situated objects and transforming and visualizing that data in relation to a given context. Using phones or AR headsets, these applications generate object-related data or a visual dashboard for further exploration. Such systems, and the interplay between the real and digital worlds they enable, could open new avenues for adaptive visualizations and new dimensions of browsing.

About the Author/Speaker: Zona Kostic is a research, teaching, and innovation fellow at Harvard University. Her professional interests are at the intersection of machine learning, data visualization, and digital realities. She is also the co-founder of Archspike, a building design optimization platform for use by real estate developers.
Website: https://www.zonakostic.com/
LinkedIn: https://www.linkedin.com/in/zonakostic/
