(Re)creating the posture portraits: Artistic and technological (re)productions of the gendered (re)presentations of bodies at institutions of higher education

Journal paper
Baldwin, N. A., Henderson, H., Lee, S.
Body Studies Journal
Publication year: 2020

Abstract

This article explores the connection between the posture portraits taken at colleges and universities as part of modern physical education programs in the United States and the eugenics movement of the late-nineteenth and early-twentieth centuries, with its underlying assumptions about what it means to be “normal.” We examine how the reification of ideologies around health and discipline through a medicalized gaze and photographic technologies led American scientists and academics of this period to pathologize gendered, racialized people, and people with disabilities as “Other.”

In this research, we engaged in a recreation of this narrative of pathology and shame by using photography and the arts — specifically dance — to refocus the gaze and to critique the academic discourse of the time by celebrating the body and the participatory process of creating knowledge. More specifically, in our recreation, we focused on how transparency, agency, and choice render bodies as interactive “performers” and how, by engaging in thoughtfully posed movements, we can make art from data, from history, and from a process of de-literalizing and de-othering.

Keywords

  • Art
  • Eugenics
  • Gender
  • Healthism
  • Modernism
  • Physical Education
  • Postmodernism
  • Posture
  • Science

Acceptability, feasibility, and preliminary efficacy of a theory-based relational embodied conversational agent mobile phone intervention to promote HIV medication adherence in young HIV-positive African American MSM

Journal paper
Dworkin, M. S., Lee, S., Chakraborty, A., Monahan, C., Hightow-Weidman, L., Garofalo, R., Qato, D. M., Liu, L. C., Jimenez, A.
AIDS Education and Prevention, 31(1)
Publication year: 2019

Abstract

An embodied conversational agent can serve as a relational agent and provide information, motivation, and behavioral skills. To evaluate the feasibility, acceptability, and preliminary efficacy of My Personal Health Guide, a theory-based mobile-delivered embodied conversational agent intervention to improve adherence to antiretroviral therapy in young African American men who have sex with men, we conducted this prospective pilot study using a 3-month pre–post design. Outcome measures included adherence, acceptability, feasibility, pre versus post health literacy, and pre versus post self-efficacy. There were 43 participants. Pill count adherence > 80% improved from 62% at baseline to 88% at follow-up (p = .05). The acceptability of the app was high. Feasibility issues identified included loss of usage data from unplanned participant app deletion. Health literacy improved whereas self-efficacy was high at baseline and follow-up. This pilot study of My Personal Health Guide demonstrated acceptability and preliminary efficacy in improving adherence in this important population.

Keywords

  • Adherence
  • mHealth
  • HIV
  • African American MSM
  • Avatar

(Re)creating the posture portraits: Artistic and technological (re)productions of the gendered (re)presentations of bodies at institutions of higher education

Conference workshop
Baldwin, A., Henderson, H., Lee, S.
“TechnoLogics: Power and Resistance”, Gender, Bodies & Technology 5th Transdisciplinary Conference, April 25-27, 2019, Roanoke VA, USA
Publication year: 2019

A realistic talking human embodied agent mobile phone intervention to promote HIV medication adherence and retention in care in young HIV-positive African American men who have sex with men: qualitative study.

Journal paper
Dworkin, M. S., Chakraborty, A., Lee, S., Monahan, C., Hightow-Weidman, L., Garofalo, R., Qato, D. M., Liu, L. C., Jimenez, A.
Journal of Medical Internet Research mHealth and uHealth, 6(7).
Publication year: 2018

Abstract

Background
Avatars and embodied agents are a promising innovation for human health intervention because they may serve as relational agents that augment user engagement in a behavioral change intervention and motivate behavioral change such as antiretroviral adherence and retention in care.

Objective
This study aimed to develop a theory-driven talking avatar-like embodied agent mobile phone intervention guided by the information-motivation-behavioral skills model to promote HIV medication adherence and retention in care in young African American men who have sex with men (MSM).

Methods
We performed 5 iterative focus groups in Chicago with HIV-positive African American MSM aged 18-34 years to inform the ongoing development of a mobile phone app. Participants were recruited, using fliers and word of mouth, from 4 University of Illinois at Chicago Community Outreach Intervention Project sites located in different high HIV incidence areas of the city and from the University of Illinois at Chicago HIV clinic. The focus group data analysis included developing an ongoing list of priorities for app changes, which two of the investigators discussed in light of the project timeline, the available resources, and the extent to which each change served the app’s objectives.

Results
In this study, 16 men participated, including 3 who participated in two groups. Acceptability of an embodied agent app was universal in all 5 focus groups. The app included embodied agent responses to questions, antiretroviral regimen information, adherence tracking, CD4 count and viral load tracking, motivational spoken messages, and customizability. Concerns identified and addressed in the development process included privacy, stigma, avoiding a harsh or commanding tone of voice, avoiding negative motivational statements, and providing reminder functions for a variety of health care interactions.

Conclusions
An avatar-like embodied agent mHealth approach was acceptable to young HIV-positive African American MSM. Its relational nature may make it an effective method of informing, motivating, and promoting health behavioral skills. Furthermore, the app’s ease of access, stigma-free environment, and audiovisual format may help overcome some adherence barriers reported in this population.

Keywords

  • Adherence
  • mHealth
  • HIV
  • African American
  • Men who have sex with men
  • Avatar
  • Embodied agent

AI in informal science education: Bringing Turing back to life to perform the Turing Test.

Journal paper
Gonzalez, A. J., Hollister, J. R., DeMara, R. F., Leigh, J., Lanman, B., Lee, S., Parker, S., Walls, C., Parker, J., Wong, J., Barham, C., Wilder, B.
Journal of Artificial Intelligence in Education, 27(2)
Publication year: 2017

Abstract

This paper describes an interactive museum exhibit featuring an avatar of Alan Turing that informs museum visitors about artificial intelligence and Turing’s seminal Turing Test for machine intelligence. The objective of the exhibit is to engage and motivate visiting children in the hope of sparking an interest in computer science and artificial intelligence, and to encourage them to consider pursuing future studies and/or careers in these fields. The exhibit interacts with the visitors, allowing them to participate in a simplified version of Turing’s test that is brief and informal to suit the limitations of a five-minute exhibit. In this exhibit, the visitor (the exhibit targets middle school age children) invokes an avatar of his/her own choice and acts to endow it with human-like qualities (voice, brain, eyesight and hearing). Then, the visitor engages the avatar in a brief question-and-answer session to determine whether the visitor thinks that he/she is interacting with a real human on a video conference or with an avatar. We consider this interaction to be an extension of the original Turing Test because, unlike Turing’s original that used text via a teletype, this version features a graphical embodiment of an agent with which one can converse in spoken natural language. This extension serves to make passing the Turing Test more difficult, as now the avatar must not only communicate like a human, but also look, sound and act the part. It also makes the exhibit visual, dynamic and interesting to the visitors. Evaluations were performed with museum visitors, both in backrooms with prototypes and on the museum floor with the final version. The formative and summative evaluations performed indicated overall success in engaging the museum visitors and increasing their interest in computer science.
More specifically, the formative testing, mostly done in quiet back rooms with selected test subjects, showed that self-reported Likert scale responses (1 being negative and 5 being positive) on the question about increased interest in computer science rose from 3.16 in the first evaluation to 4.38 in the third, and likewise for the question about exhibit enjoyment (from 3.92 to 4.56). The summative evaluation, done through unobtrusive observation of exhibit use on the museum floor, indicated that almost 74% of the parties that initiated the exhibit were either highly or moderately engaged by it. However, there was one major negative finding, namely the overly long duration of the exhibit, which may have caused premature abandonment of the exhibit in several cases during the summative evaluation. These tests and their results are presented and discussed in detail in this paper. The exhibit has been on permanent display at the Orlando (FL) Science Center since June 2014 and has received a strongly positive response from visitors since that time.

Keywords

  • Artificial virtual humans
  • Avatars
  • Embodied conversational agents
  • Informal science education
  • Science museum exhibit
  • Turing test

Development of a mobile personal health guide for HIV-infected African American MSM

Conference paper
Lee, S., Lu, Y., Chakraborty, A., Dworkin, M.
Intelligent Virtual Agents: Proceedings of the 16th International Conference (LNCS) (Vol. 10011, pp. 447–450)
Publication year: 2016

Abstract

The study presented in this paper focuses on a relational, educational, and motivational virtual human (VH) mobile phone application for African American men who have sex with men (AAMSM). This project targets increasing adherence to HIV medication and improving the proportion of HIV-infected persons engaged in care by developing a theory-based mobile phone intervention that engages young HIV-positive AAMSM. The intervention aims to improve the likelihood of compliance with healthy behavior, leading to both patient benefits (decreased morbidity, mortality, and resistant virus) and population benefits (decreased HIV transmission). The VH encourages interaction with information and functions that promote engagement with the HIV Care Continuum, provide fundamental HIV information, present motivating statements, facilitate interaction with healthcare, visualize laboratory results, and encourage, explain, and illustrate relevant behavioral skills.

Keywords

  • Mobile virtual human
  • Health guide
  • HIV Care
  • Adherence

Emotionally augmented storytelling agent - the effects of dimensional emotion modeling for agent behavior control

Conference paper
Lee, S., Johnson, A., Leigh, J., Renambot, L., Jones, S., Di Eugenio, B.
Intelligent Virtual Agents: Proceedings of the 15th International Conference (LNCS) (Vol. 9238, pp. 483-487)
Publication year: 2015

Abstract

The study presented in this paper applies a dimensional emotion theory to augment agent nonverbal behavior, including emotional facial expressions and head gestures, in order to evaluate subtle differences across fine-grained conditions in the context of emotional storytelling. In a user study in which participants rated perceived naturalness for seven different conditions, the augmented facial expressions were significantly preferred, whereas the head gesture model received mixed ratings: a significant preference in high-arousal cases (happy) but no significant preference in low-arousal cases (sad).

Development and validation of a virtual human vignette to compare nurses’ assessment and intervention choices for pain in critically ill children

Journal paper
LaFond, C. M., Vincent, C. V., Lee, S., Corte, C., Hershberger, P. E., Johnson, A., Park, C. G., Wilkie, D. J.
Journal of the Society for Simulation in Healthcare, 10(1), 14-20.
Publication year: 2014

Abstract

Introduction
As virtual experiences are increasingly used in healthcare training and research, it is important that adequate processes are applied for developing valid scenarios. We describe the development and validation of virtual human (VH) vignettes, computer-generated scenarios with animated patients and clinical information, for a mixed-methods study regarding nurses’ assessment and intervention choices for critically ill children’s pain.

Methods
We followed the Case Development and Review Process for High-Fidelity Simulation Case Scenarios, including use of validated written vignettes and content experts. Forty nurses described their pain assessment and intervention choices for the newly derived VH vignettes and completed a pain questionnaire. Nurses’ reports of VH vignette consistency with their professional experience and recognition of VH facial expressions were evaluated to establish face validity. Their pain ratings for the VH and written (questionnaire) vignettes were evaluated for convergent validity. Qualitative content analysis, descriptive statistics, correlations, and paired t-tests were employed.

Results
Most nurses (68.4%) supported vignette consistency with their professional experience. Facial expression recognition was 98.4%. Smiling children’s pain was rated significantly lower than that of grimacing children in both VH and written vignettes. Pain was rated significantly lower for grimacing children in the VH vignettes than in the written vignettes. VH vignette pain ratings were strongly correlated with their written counterparts.

Conclusions
This process was effective for developing VH vignettes that demonstrated good face validity with participants and convergent validity with written vignettes. VH vignettes may be useful in studying the influence of facial actions on nurses’ choices for children’s pain assessment and treatment.

Keywords

  • Vignettes
  • Pediatric Intensive Care
  • Pediatric Nurses
  • Children’s Pain
  • Sickle Cell Disease

Passing an enhanced Turing test - Interacting with lifelike computer representations of specific individuals

Journal paper
Gonzalez, A., DeMara, R., Hung, V., Leon-Barth, C., Elvir, M., Hollister, J., Kobosko, S., Leigh, J., Johnson, A., Jones, S., Carlson, G., Lee, S., Renambot, L., Brown, M.
Journal of Intelligent Systems, 22(4), 365-415.
Publication year: 2013

Abstract

This article describes research to build an embodied conversational agent (ECA) as an interface to a question-and-answer (Q/A) system about a National Science Foundation (NSF) program. We call this ECA the LifeLike Avatar, and it can interact with its users in spoken natural language to answer general as well as specific questions about specific topics. In an idealized case, the LifeLike Avatar could conceivably provide a user with a level of interaction such that he or she would not be certain as to whether he or she is talking to the actual person via video teleconference. This could be considered a (vastly) extended version of the seminal Turing test. Although passing such a test is still far off, our work moves the science in that direction. The Uncanny Valley notwithstanding, applications of such lifelike interfaces could include those where specific instructors/caregivers could be represented as stand-ins for the actual person in situations where personal representation is important. Possible areas that might benefit from these lifelike ECAs include health-care support for elderly/disabled patients in extended home care, education/training, and knowledge preservation. Another more personal application would be to posthumously preserve elements of the persona of a loved one for family members. We apply this approach to a Q/A system for knowledge preservation and dissemination, where the specific individual who had this knowledge was to retire from the US National Science Foundation. The system is described in detail, and evaluations were performed to determine how well the system was perceived by users.

Keywords

  • Embodied conversational agents
  • Chatbots
  • Animated pedagogical agents
  • Dialogue management
  • Automated question-and-answer systems

A virtual patient to assess pediatric intensive care unit (PICU) nurses’ pain assessment and intervention practices

Conference paper
Lee, S., LaFond, C., Johnson, A., Vincent, C., Leigh, J., Renambot, L.
Intelligent Virtual Agents: Proceedings of the 13th International Conference (LNCS) (Vol. 8108, pp. 159–167)
Publication year: 2013

Abstract

Pediatric intensive care unit (PICU) nurses play a crucial role in managing children’s pain. While virtual patient (VP) technology has shown significant benefits in assisting with the practice of healthcare, there has been little research effort in this particular setting. This paper presents a pilot evaluation study to determine the validity of VP vignettes – including PICU nurses’ recognition of the facial expressions (smiling and grimacing) of VPs and nurses’ descriptions as to whether the VP vignettes are consistent with their professional experiences. The results of our initial study (n=20) confirm that nurses identified the given expressions correctly (98.5%) and validated the similarity of the vignettes to their experiences with patients in the PICU (87%). A better understanding of nurses’ pain management practices will aid the development of future VP interventions.

Keywords

  • Virtual Patient Vignette
  • Pain Assessment
  • Intervention Practice
  • Pediatric

Performance evaluation of incremental vector clocks

Conference paper
Lee, S., Kshemkalyani, A.D., Shen, M.
Parallel and Distributed Computing: Proceedings of the 10th International Symposium
Publication year: 2011

Abstract

The vector clock is an important mechanism to track logical time and causality in distributed systems. Vector clocks incur an overhead of n integers on each message, where n is the number of processes in the system. The incremental vector clock technique attaches only the changed components of the vector clock to a message; this widely used technique reduces the size of the message overhead. We evaluate the performance of the incremental vector clock technique via extensive simulations under a wide range of network loads and communication patterns. Our simulations confirm the intuition that this technique shows marked gains when application processes communicate with locality patterns. In addition, the simulations revealed the following behaviour: (i) the message overhead is not much dependent on the number of processes, (ii) a higher multicast frequency, as opposed to unicasting, lowers the message overhead, and (iii) a slower network speed relative to the inter-message generation time lowers the message overhead.
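The incremental technique summarized above can be sketched in a few lines. This is an illustrative toy, not the simulation framework evaluated in the paper; the `Process` class and the `(index, value)` delta encoding are assumptions chosen for exposition.

```python
# Toy sketch of the incremental vector clock technique: a process attaches to
# each message only the vector components that changed since its last send to
# the same destination, instead of the full n-entry vector.

class Process:
    def __init__(self, pid, n):
        self.pid = pid
        self.clock = [0] * n    # full vector clock, one entry per process
        self.last_sent = {}     # dest pid -> snapshot of clock at last send to dest

    def local_event(self):
        self.clock[self.pid] += 1

    def send(self, dest):
        self.clock[self.pid] += 1
        prev = self.last_sent.get(dest, [0] * len(self.clock))
        # attach only the (index, value) pairs that changed since the last send
        delta = {i: v for i, v in enumerate(self.clock) if v != prev[i]}
        self.last_sent[dest] = list(self.clock)
        return delta

    def receive(self, delta):
        self.clock[self.pid] += 1
        for i, v in delta.items():          # component-wise max merge
            self.clock[i] = max(self.clock[i], v)
```

With locality (a process repeatedly messaging the same neighbors), most entries are unchanged between sends, so the attached delta stays small, which is the intuition behind the gains the simulations confirm.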

Designing an expressive avatar of a real person.

Conference paper
Lee, S., Carlson, G., Jones, S., Johnson, A., Leigh, J., Renambot, L.
Intelligent Virtual Agents: Proceedings of the 10th International Conference (LNCS) (Vol. 6356, pp. 64–76)
Publication year: 2010

Abstract

The human ability to express and recognize emotions plays an important role in face-to-face communication, and as technology advances it will be increasingly important for computer-generated avatars to be similarly expressive. In this paper, we present the detailed development process for the Lifelike Responsive Avatar Framework (LRAF) and a prototype application for modeling a specific individual to analyze the effectiveness of expressive avatars. In particular, the goals of our pilot study (n = 1,744) are to determine whether the specific avatar being developed is capable of conveying emotional states (Ekman’s six classic emotions) via facial features and whether a realistic avatar is an appropriate vehicle for conveying the emotional states accompanying spoken information. The results of this study show that happiness and sadness are correctly identified with a high degree of accuracy while the other four emotional states show mixed results.

Case study - Designing an advanced visualization system for geological core drilling expeditions

Conference paper
Chen, Y., Hur, H., Lee, S., Leigh, J., Johnson, A., Renambot, L.
Human Factors in Computing Systems: Proceedings of the 28th ACM Conference (pp. 4645-4660)
Publication year: 2010

Abstract

We present the design and process of an interactive high-resolution visualization system for diverse and distributed real-world geological core drilling expeditions. The high domain knowledge barrier makes it difficult for a person outside this field to imagine the user experience, and the globally distributed core drilling community imposes additional design constraints in space and time. In addition to activities proposed in the prior literature, we used the “immersive empathic design” approach of having a computer scientist trained as a junior core technician. Drawing on in-situ observation and interview evaluations from ongoing expeditions, we present the system and the lessons learned in the process. This approach makes the best use of precious co-located opportunities, allows the developer to build up domain knowledge efficiently, and establishes a trust relationship between the developer and the scientists. The system designed through this approach formed a sustainable foundation that was adapted in the following design iterations, and the process allowed the software developer to experience authentic user activities. The designed system is innovative and helps scientists solve real-world problems. This approach can be a useful example to HCI practitioners who work with potential users or communities that share similar properties.

A semi-automated model for translating natural language queries into meaningful visualizations

Conference paper
Sun, Y., Leigh, J., Johnson, A., Lee, S.
Smart Graphics: Proceedings of the 10th International Symposium (LNCS) (Vol. 6133, pp. 184-195)
Publication year: 2010

Abstract

While many visualization tools exist that offer sophisticated functions for charting complex data, they still expect users to possess a high degree of expertise in wielding the tools to create an effective visualization. This paper presents Articulate, an attempt at a semi-automated visual analytic model that is guided by a conversational user interface to allow users to verbally describe and then manipulate what they want to see. We use natural language processing and machine learning methods to translate the imprecise sentences into explicit expressions, and then apply a heuristic graph generation algorithm to create a suitable visualization. The goal is to relieve the user of the burden of having to learn a complex user interface in order to craft a visualization.
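The flavor of the heuristic graph-generation step can be illustrated with a toy keyword-to-chart mapper. This is not the Articulate system itself (which uses NLP and machine learning rather than substring matching); the `CHART_RULES` table and `suggest_chart` function are hypothetical names invented for this sketch.

```python
# Toy illustration of heuristically mapping cues in a natural-language query
# to a chart type, in the spirit of the graph-generation step described above.

CHART_RULES = [
    ({"trend", "over time", "change"}, "line chart"),
    ({"compare", "versus", "difference"}, "bar chart"),
    ({"relationship", "correlate", "against"}, "scatter plot"),
    ({"share", "proportion", "percentage"}, "pie chart"),
]

def suggest_chart(query: str) -> str:
    """Return the first chart type whose cue words appear in the query."""
    q = query.lower()
    for keywords, chart in CHART_RULES:
        if any(k in q for k in keywords):
            return chart
    return "table"  # fall back to a plain table when no cue matches
```

A real system would parse the sentence, classify intent, and bind data attributes to visual channels; the point here is only that imprecise wording can be resolved to an explicit chart specification by rules.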

Keywords

  • Visual analytics
  • Natural language processing
  • Conversational interface
  • Automatic visualization

Design an interactive visualization system for core drilling expeditions using immersive empathic method

Conference paper
Chen, Y., Lee, S., Hur, H., Leigh, J., Johnson, A., Renambot, L.
Human Factors in Computing Systems: Proceedings of the 27th ACM Conference (pp. 2671-2674)
Publication year: 2009

Abstract

In this paper, we propose an immersive empathic design method and use it to create an interactive high-resolution core visualization system for real-world geological core drilling expeditions. A high domain knowledge barrier makes it difficult for a person from outside this field to imagine the user experience simply through observation. The globally distributed nature of the core drilling community imposes further design constraints. We used this approach to embed a computer scientist trained as a junior core technician. This process allowed the developer to experience authentic user activities and enabled the design of an innovative system for solving real-world problems. This approach made the best use of precious co-located opportunities, overcame the initial domain knowledge barrier, and established a trust relationship between the developer and the domain scientists. The system designed through this approach formed a sustainable and adaptive foundation that the domain scientists can build on. Through in-situ deployment, observation, and interview evaluations from ongoing expeditions, we present the advantages of this process.

Towards interactive training with an avatar-based human-computer interface

Conference paper
DeMara, R., Gonzalez, A., Hung, V., Leon-Barth, C., Dookhoo, R., Jones, S., Johnson, A., Leigh, J., Renambot, L., Lee, S., Carlson, G.
Interservice/Industry Training, Simulation & Education Conference (I/ITSEC) 2008
Publication year: 2008

Abstract

The development of avatars has significant potential to enhance realism, automation capability, and effectiveness across a variety of training environments. Project Lifelike is a three-year National Science Foundation effort whose objective is to develop and evaluate realistic avatar interfaces as portals to intelligent programs capable of relaying knowledge and training skills. The interface aims to support spoken dialog within a limited domain and to incorporate learning capabilities that keep its knowledge current and accurate. Research objectives focus on the integration of speaker-independent continuous speech recognition technology with a context-based dialog system and real-time graphics rendering capability derived from live subject motion capture traces. The motion capture traces are used by the avatar to provide spoken interaction with gestural expressions. This paper describes the first phase of the Lifelike project, which developed an interactive avatar prototype of a National Science Foundation (NSF) program manager, Dr. Alex Schwarzkopf, for whom a contextual graph representation of domain knowledge was created. A Graphical Asset Production Pipeline was developed to allow digitization of the facial characteristics and physical movements of Dr. Schwarzkopf. Next, an example subset of his knowledge of NSF protocols was encoded in a grammar-based speech interpretation system and context-based reasoning system. These systems were integrated with the Lifelike Responsive Avatar Framework to enable the avatar to receive spoken input and generate appropriate verbal and non-verbal responses. The system demonstrates conveyance of knowledge within a limited domain such as NSF project reporting requirements.
Work toward improving the realism of the avatar, long-term efforts toward creating a toolbox for generalization to other training applications, and results of evaluation of how users respond to different characteristics that contribute to realism in an avatar are discussed.