Development of a mobile personal health guide for HIV-infected African American MSM

Conference paper
Lee, S., Lu, Y., Chakraborty, A., Dworkin, M.
Intelligent Virtual Agents: Proceedings of the 16th International Conference (LNCS) (Vol. 10011, pp. 447–450)
Publication year: 2016

Abstract

The study presented in this paper focuses on a relational, educational, and motivational virtual human (VH) mobile phone application for African American men who have sex with men (AAMSM). The project aims to increase adherence to HIV medication and to improve the proportion of HIV-infected persons engaged in care by developing a theory-based mobile phone intervention that engages young HIV-positive AAMSM. The intervention is intended to improve the likelihood of compliance with healthy behavior, leading to both patient benefits (decreased morbidity, mortality, and resistant virus) and population benefits (decreased HIV transmission). The VH encourages interaction with information and functions that promote engagement with the HIV Care Continuum, provide fundamental HIV information, present motivating statements, facilitate interaction with healthcare, visualize laboratory results, and encourage, explain, and illustrate relevant behavioral skills.

Keywords

  • Mobile virtual human
  • Health guide
  • HIV Care
  • Adherence

Emotionally augmented storytelling agent - the effects of dimensional emotion modeling for agent behavior control

Conference paper
Lee, S., Johnson, A., Leigh, J., Renambot, L., Jones, S., Di Eugenio, B.
Intelligent Virtual Agents: Proceedings of the 15th International Conference (LNCS) (Vol. 9238, pp. 483–487)
Publication year: 2015

Abstract

The study presented in this paper applies a dimensional emotion theory to augment agent nonverbal behavior, including emotional facial expressions and head gestures, in order to evaluate subtle differences across fine-grained conditions in the context of emotional storytelling. In a user study, participants rated the perceived naturalness of seven conditions; the results showed a significantly higher preference for the augmented facial expressions, whereas the head gesture model received mixed ratings: a significant preference in high-arousal (happy) cases but no significant difference in low-arousal (sad) cases.

A virtual patient to assess pediatric intensive care unit (PICU) nurses’ pain assessment and intervention practices

Conference paper
Lee, S., LaFond, C., Johnson, A., Vincent, C., Leigh, J., Renambot, L.
Intelligent Virtual Agents: Proceedings of the 13th International Conference (LNCS) (Vol. 8108, pp. 159–167)
Publication year: 2013

Abstract

Pediatric intensive care unit (PICU) nurses play a crucial role in managing children’s pain. While virtual patient (VP) technology has shown significant benefits in assisting with the practice of healthcare, there has been little research in this particular setting. This paper presents a pilot evaluation study to determine the validity of VP vignettes, including PICU nurses’ recognition of the facial expressions (smiling and grimacing) of VPs and nurses’ descriptions of whether the VP vignettes are consistent with their professional experiences. The results of our initial study (n=20) confirm that nurses identified the given expressions correctly (98.5%) and validated the similarity of the vignettes to their experiences with patients in the PICU (87%). A better understanding of nurses’ pain management practices will aid the development of future VP interventions.

Keywords

  • Virtual Patient Vignette
  • Pain Assessment
  • Intervention Practice
  • Pediatric

Performance evaluation of incremental vector clocks

Conference paper
Lee, S., Kshemkalyani, A.D., Shen, M.
Parallel and Distributed Computing: Proceedings of the 10th International Symposium
Publication year: 2011

Abstract

The vector clock is an important mechanism for tracking logical time and causality in distributed systems. Vector clocks incur an overhead of n integers on each message, where n is the number of processes in the system. The incremental vector clock technique, which is widely used to reduce this message overhead, attaches only the changed components of the vector clock to a message. We evaluate the performance of the incremental vector clock technique via extensive simulations under a wide range of network loads and communication patterns. Our simulations confirm the intuition that this technique shows marked gains when application processes communicate with locality patterns. In addition, the simulations revealed the following behavior: (i) the message overhead depends little on the number of processes, (ii) a higher multicast frequency, as opposed to unicasting, lowers the message overhead, and (iii) a slower network speed relative to the inter-message generation time lowers the message overhead.
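A minimal sketch of the incremental idea, with hypothetical names and simplified semantics (the paper's own protocol may differ in bookkeeping details): each sender remembers the vector it last sent to each destination and attaches only the components that changed since then, and the receiver merges those components by taking the maximum.

```python
class Process:
    """Toy process with an incremental vector clock (illustrative only)."""

    def __init__(self, pid, n):
        self.pid = pid
        self.clock = [0] * n   # full vector clock, one entry per process
        self.last_sent = {}    # destination pid -> clock snapshot at last send

    def send(self, dest):
        """Tick the local component, then return only the (index, value)
        pairs that changed since the last send to this destination."""
        self.clock[self.pid] += 1
        prev = self.last_sent.get(dest, [0] * len(self.clock))
        delta = [(i, v) for i, v in enumerate(self.clock) if v != prev[i]]
        self.last_sent[dest] = list(self.clock)
        return delta

    def receive(self, delta):
        """Merge the incremental update component-wise (max), then tick."""
        for i, v in delta:
            self.clock[i] = max(self.clock[i], v)
        self.clock[self.pid] += 1
```

With n = 3 processes, the first message from p0 to p1 carries one changed component rather than all three, and repeated sends along the same channel keep carrying only what changed, which is where the savings under locality patterns come from.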

Designing an expressive avatar of a real person

Conference paper
Lee, S., Carlson, G., Jones, S., Johnson, A., Leigh, J., Renambot, L.
Intelligent Virtual Agents: Proceedings of the 10th International Conference (LNCS) (Vol. 6356, pp. 64–76)
Publication year: 2010

Abstract

The human ability to express and recognize emotions plays an important role in face-to-face communication, and as technology advances it will be increasingly important for computer-generated avatars to be similarly expressive. In this paper, we present the detailed development process for the Lifelike Responsive Avatar Framework (LRAF) and a prototype application for modeling a specific individual to analyze the effectiveness of expressive avatars. In particular, the goals of our pilot study (n = 1,744) are to determine whether the specific avatar being developed is capable of conveying emotional states (Ekman’s six classic emotions) via facial features and whether a realistic avatar is an appropriate vehicle for conveying the emotional states accompanying spoken information. The results of this study show that happiness and sadness are correctly identified with a high degree of accuracy while the other four emotional states show mixed results.

Case study - Designing an advanced visualization system for geological core drilling expeditions

Conference paper
Chen, Y., Hur, H., Lee, S., Leigh, J., Johnson, A., Renambot, L.
Human Factors in Computing Systems: Proceedings of the 28th ACM Conference (pp. 4645–4660)
Publication year: 2010

Abstract

We present the design and process of an interactive high-resolution visualization system for diverse and distributed real-world geological core drilling expeditions. The high domain knowledge barrier makes it difficult for a person outside this field to imagine the user experience, and the globally distributed core drilling community imposes further design constraints in space and time. In addition to activities proposed in prior literature, we used the “immersive empathic design” approach of having a computer scientist trained as a junior core technician. Drawing on in-situ observation and interview evaluations from ongoing expeditions, we present the system and the lessons learned in the process. The approach makes the best use of precious co-located opportunities, allows the developer to build up domain knowledge efficiently, establishes a trust relationship between the developer and scientists, and lets the software developer experience authentic user activities. The system designed through this approach formed a sustainable foundation that was adapted in the following design iterations; it is innovative and helps scientists solve real-world problems. This approach can be a useful example for HCI practitioners who work with potential users or communities that share similar properties.

A semi-automated model for translating natural language queries into meaningful visualizations

Conference paper
Sun, Y., Leigh, J., Johnson, A., Lee, S.
Smart Graphics: Proceedings of the 10th International Symposium (LNCS) (Vol. 6133, pp. 184–195)
Publication year: 2010

Abstract

While many visualization tools exist that offer sophisticated functions for charting complex data, they still expect users to possess a high degree of expertise in wielding the tools to create an effective visualization. This paper presents Articulate, an attempt at a semi-automated visual analytic model that is guided by a conversational user interface to allow users to verbally describe and then manipulate what they want to see. We use natural language processing and machine learning methods to translate the imprecise sentences into explicit expressions, and then apply a heuristic graph generation algorithm to create a suitable visualization. The goal is to relieve the user of the burden of having to learn a complex user interface in order to craft a visualization.

Keywords

  • Visual analytics
  • Natural language processing
  • Conversational interface
  • Automatic visualization

Design an interactive visualization system for core drilling expeditions using immersive empathic method

Conference paper
Chen, Y., Lee, S., Hur, H., Leigh, J., Johnson, A., Renambot, L.
Human Factors in Computing Systems: Proceedings of the 27th ACM Conference (pp. 2671–2674)
Publication year: 2009

Abstract

In this paper, we propose an immersive empathic design method and use it to create an interactive high-resolution core visualization system for real-world geological core drilling expeditions. A high domain knowledge barrier makes it difficult for a person from outside this field to imagine the user experience through observation alone, and the globally distributed nature of the core drilling community imposes further design constraints. Applying this approach, we embedded a computer scientist trained as a junior core technician. This process allowed the developer to experience authentic user activities and enabled the design of an innovative system for solving real-world problems. The approach made the best use of precious co-located opportunities, overcame the initial domain knowledge barrier, and established a trust relationship between the developer and the domain scientists. The system designed through this approach formed a sustainable and adaptive foundation that the domain scientists can build on. Drawing on in-situ deployment, observation, and interview evaluations from ongoing expeditions, we present the advantages of this process.

Towards interactive training with an avatar-based human-computer interface

Conference paper
DeMara, R., Gonzalez, A., Hung, V., Leon-Barth, C., Dookhoo, R., Jones, S., Johnson, A., Leigh, J., Renambot, L., Lee, S., Carlson, G.
Interservice/Industry Training, Simulation & Education Conference (I/ITSEC) 2008
Publication year: 2008

Abstract

The development of avatars has significant potential to enhance realism, automation capability, and effectiveness across a variety of training environments. Project Lifelike is a three-year National Science Foundation (NSF) effort whose objective is to develop and evaluate realistic avatar interfaces as portals to intelligent programs capable of relaying knowledge and training skills. The interface aims to support spoken dialog within a limited domain, with learning capabilities that keep its knowledge current and accurate. Research objectives focus on integrating speaker-independent continuous speech recognition technology with a context-based dialog system and real-time graphics rendering capability derived from live subject motion capture traces. The motion capture traces are used by the avatar to provide spoken interaction with gestural expressions. This paper describes the first phase of the Lifelike project, which developed an interactive avatar prototype of an NSF program manager, Dr. Alex Schwarzkopf, for whom a contextual graph representation of domain knowledge was created. A Graphical Asset Production Pipeline was developed to allow digitization of the facial characteristics and physical movements of Dr. Schwarzkopf. Next, an example subset of his knowledge of NSF protocols was encoded in a grammar-based speech interpretation system and a context-based reasoning system. These systems were integrated with the Lifelike Responsive Avatar Framework to enable the avatar to receive spoken input and generate appropriate verbal and non-verbal responses. The system demonstrates conveyance of knowledge within a limited domain such as NSF project reporting requirements. We also discuss work toward improving the realism of the avatar, long-term efforts toward creating a toolbox for generalization to other training applications, and the results of an evaluation of how users respond to different characteristics that contribute to realism in an avatar.