Bulletin, February/March 2006

Plenary Session II
Just-in-Time Information: Is It in Your Future?

by Irene Travis

Irene Travis is the editor of the Bulletin of the American Society for Information Science and Technology. She can be reached at bulletin<at>asis.org.

Just-in-time information is “proactively offering information to a user that is highly relevant to what s/he is currently focused on.” Pattie Maes of the MIT Media Laboratory’s Ambient Intelligence Group offered this definition of her work in an address to the second plenary session at the ASIS&T Annual Meeting in Charlotte, November 3, 2005. Over the past 10 years her projects at the Media Lab have investigated just-in-time information support both for the desktop and for people on the move.

In the desktop environment Maes has explored how we can give users information pertinent to whatever they are doing in an application. Similarly, for people on the move she asks: How might we give people relevant information about others they are meeting for the first time? How could we give tourists location-specific information about their surroundings, such as restaurants, shops or landmarks? In short, the purpose of just-in-time information is to promote “insight, inspiration and interpersonal connections” without interrupting the user’s activities. The goal is something less disruptive, Maes explained, than “Please excuse me a moment while I Google you.”

How to Offer Just-in-Time Information

Just-in-time information systems, Maes noted, must model user interests/preferences, sense the current context of the user, compute information relevant to the context and user profile (recommendation algorithm) and present information in subtle, non-intrusive ways. The early systems for the desktop had functions such as recommending, remembering, mentoring and match-making, while in later systems information is triggered by such factors as location or objects with embedded electronic identification that are being handled or looked at.
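The four stages Maes names – modeling the user, sensing context, computing relevance and presenting subtly – can be sketched in a few lines of code. This is purely an illustration; the profile sets, scoring weights and threshold below are invented, not drawn from any Media Lab system.

```python
# Hypothetical sketch of the just-in-time loop: score candidate items
# against both the user profile and the sensed context, and surface only
# items relevant enough to merit a subtle hint. All data is illustrative.

def recommend(user_profile, context, items, threshold=0.5):
    """Return (name, score) pairs for items worth hinting at, best first."""
    hints = []
    for item in items:
        topics = item["topics"]
        # Relevance combines overlap with long-term interests and with
        # the current situation, weighted equally in this toy version.
        profile_match = len(topics & user_profile) / len(topics)
        context_match = len(topics & context) / len(topics)
        score = 0.5 * profile_match + 0.5 * context_match
        if score >= threshold:
            hints.append((item["name"], round(score, 2)))
    return sorted(hints, key=lambda h: -h[1])

profile = {"jazz", "architecture", "thai food"}   # modeled user interests
context = {"thai food", "downtown"}               # sensed location/activity
items = [
    {"name": "Thai Basil review", "topics": {"thai food", "downtown"}},
    {"name": "Opera schedule", "topics": {"opera"}},
]
print(recommend(profile, context, items))
```

Only the restaurant review clears the threshold here; the opera schedule, matching neither profile nor context, is silently dropped rather than interrupting the user.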

For example, the ReachMedia Project, carried out by Assaf Feldman and Sajid Sadi in 2005, explores on-the-move interaction with augmented objects. The user wears a wristband that contains a wireless radio frequency identification (RFID) reader such as those used to track packages. The wristband reads RFID tags in objects the user holds. Touching an object results in a menu of services and information being displayed on the screen of a smart phone. For example, picking up a tagged book on a friend’s coffee table connects your smart phone to a server and lets the server know that you picked up that book. The server can then proactively look for and transmit potentially useful and personalized information, such as reviews, to you. With a hands-free option, the system can offer information in auditory form, so you can listen to reviews and related services for the book while you are holding it. You can also navigate the resulting menus with hand gestures, which are processed by accelerometers on the wristband.
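The interaction flow described above – a tag read triggers a server lookup, which returns a personalized menu of services – might be sketched as follows. This is not the ReachMedia implementation; the tag IDs, catalog and preference flags are all invented for illustration.

```python
# Illustrative sketch of a tag-triggered, personalized service menu.
# OBJECT_DB stands in for the server's catalog of augmented objects.

OBJECT_DB = {
    "tag:0451": {"type": "book", "title": "The Media Lab"},
}

def on_tag_read(tag_id, user_prefs):
    """Called when the wristband's RFID reader sees a tag on a held object."""
    obj = OBJECT_DB.get(tag_id)
    if obj is None:
        return []  # unknown object: stay silent rather than interrupt
    menu = ["summary"]
    if obj["type"] == "book" and user_prefs.get("wants_reviews"):
        menu.append("reviews")     # personalized: offered only if the user cares
    if user_prefs.get("hands_free"):
        menu.append("read aloud")  # auditory option for on-the-move use
    return menu

print(on_tag_read("tag:0451", {"wants_reviews": True, "hands_free": True}))
```

Note the design choice of returning an empty menu for unrecognized tags: a just-in-time system that cannot be helpful should disappear rather than demand attention.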

Maes included many other examples in her talk of both desktop and on-the-move applications. Brief descriptions of them are available at http://interact.media.mit.edu.

Technical Challenges

Maes next reviewed the many technical challenges of just-in-time information provision. Such a system will work, she said, only if the information offered is likely to be relevant to the user, which demands good user profiling, detection of user context and recommendation algorithms. It must also be offered unobtrusively, which requires subtle interfaces. Finally, it must require minimal user effort to access, which calls for natural “on-the-move” interfaces.

Looking at these challenges individually, user modeling may be done by having the user give information to the system explicitly, by the system gathering it implicitly through data mining of the user’s observed behavior or personal texts, such as homepages, or by some combination of these methods.
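The implicit route – mining a profile from texts the user has produced or browsed – can be illustrated with a simple term-frequency sketch. A real system would weight by recency, use TF-IDF or richer models; the documents and stopword list here are invented.

```python
# Toy implicit user modeling: extract the user's most frequent content
# words from personal texts as a crude interest profile.

from collections import Counter

STOPWORDS = {"the", "a", "and", "of", "to", "in", "on", "i"}

def build_profile(documents, top_n=2):
    """Return the top_n most frequent non-stopword terms across documents."""
    counts = Counter()
    for doc in documents:
        for word in doc.lower().split():
            if word not in STOPWORDS:
                counts[word] += 1
    return [word for word, _ in counts.most_common(top_n)]

docs = [
    "notes on jazz and jazz history",
    "jazz concert in the park",
    "the architecture of concert halls",
]
print(build_profile(docs))
```

The extracted terms (“jazz”, “concert” in this toy run) would then feed the relevance computation described earlier, without the user ever filling in a preferences form.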

Detecting context – the who, what, where and when of the user’s situation – is approached differently on the desktop and in the physical environment. On the desktop the program can sense actions in different applications, but in the physical world, offline, the system must rely on sensors in the environment or on the user. It may also require background knowledge and inferencing to differentiate, for example, the information that would be useful when you shake the hand of a new acquaintance (creating connections, breaking the ice) as opposed to when you shake hands with someone you know well (recalling the circumstances and subjects of your last conversations).
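The handshake example shows why raw sensing is not enough: the same sensed event calls for different information depending on background knowledge about the relationship. A minimal sketch of that inference step, with entirely invented contact data, might look like this:

```python
# Illustrative context inference: one event (a handshake), two different
# information needs, distinguished by background knowledge.

def handshake_info(person, known_contacts, conversation_log):
    """Decide what to surface when a handshake with `person` is sensed."""
    if person not in known_contacts:
        # New acquaintance: ice-breakers and shared connections help most.
        return {"mode": "introduction",
                "show": ["shared interests", "mutual contacts"]}
    # Familiar contact: recall where the relationship left off.
    return {"mode": "continuation",
            "show": conversation_log.get(person, [])}

log = {"dana": ["conference paper draft", "dinner plans"]}
print(handshake_info("dana", {"dana"}, log))
print(handshake_info("evan", {"dana"}, log))
```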

Recommender systems can be created using a range of approaches, including those based on cases or prototypes, on features of the content (patterns in the content) and on collaborative filtering (patterns among users).
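The collaborative-filtering approach – recommending what similar users liked – can be shown in miniature. The rating data and the cosine similarity measure below are illustrative assumptions, not any particular deployed algorithm.

```python
# Toy collaborative filtering: score unrated items by neighbors' ratings,
# weighted by how similar each neighbor's taste is to the target user's.

def similarity(a, b):
    """Cosine similarity over the items two users have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    norm_a = sum(a[i] ** 2 for i in shared) ** 0.5
    norm_b = sum(b[i] ** 2 for i in shared) ** 0.5
    return dot / (norm_a * norm_b)

def recommend_for(user, ratings):
    """Return items the user hasn't rated, best candidates first."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], their_ratings)
        for item, rating in their_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

ratings = {
    "alice": {"book_a": 5, "book_b": 4},
    "bob":   {"book_a": 5, "book_c": 5},
    "carol": {"book_b": 1, "book_d": 2},
}
print(recommend_for("alice", ratings))
```

Content-based filtering would instead compare item features to the user profile, and case-based approaches would match the current situation against stored prototypes; all three can feed the same presentation layer.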

Finally, subtle, natural interfaces require either secondary input/output modalities, such as peripheral vision or audio, or seamless, minimal integration into an existing interface, such as a cell phone. The goals are always to avoid forcing the user to change focus or be interrupted, to make recommendations proactive but easily ignorable, to avoid additional gear, devices or windows, and to support “on-the-move” access to details. In addition, such interfaces should offer “ramping”: presenting minimal “hints” that a user can ignore, while also allowing users to drill down to more detail.

Other user interface lessons learned, Maes concluded, include that transparency is key since it leads to trust, that systems must avoid making people dependent on them or producing “tunnel vision” and that all systems must protect the user’s privacy despite being based on user profiles.

In summary, Maes stated, the goal of her research group is to rethink user-information interaction by proactively offering just-in-time information, highly relevant to unique users and their current focus of attention in a non-disruptive, easily accessible way.

Audience members raised several questions. Regarding the potential to teach students about the ethical implications of their work, Maes noted that though there are no classes devoted to the topic, ethics is part of the conversation about her projects and the technology. Responding to a question about security, Maes noted that decentralized information is essential.

On system dependence, she stressed helping people find information – as opposed to finding it for them – and integrating recommendations into systems the user already uses. For example, her group tried building a personalized newspaper, but it created a serious tunnel-vision problem. Instead, one would want an augmented newspaper: the paper is the same for everyone but includes an individualized highlighter for each reader.

In response to a final question on scalability, Maes observed that her group does not think much about it in a university laboratory environment, but she commented on how surprising it is what can be done with the equipment people already have, particularly cell phones.

Pattie Maes may be reached at pattie<at>media.mit.edu.