TweetProbe: A Real-Time Microblog Stream Visualization Framework
Byungkyu (Jay) Kang, FourEyes Lab, Computer Science
As social media becomes more important in daily life, its impact is felt across numerous practices such as business marketing, information science, and the social sciences. For instance, user-generated content from major microblogs is mined to identify consumer patterns for particular types of products, and social scientists analyze it to predict voting tendencies toward candidates in national elections. Countless research projects have been conducted on social media datasets in information science, journalism, and related fields. In this project, we experiment with different visualization techniques on a real-time data stream from the major microblog service Twitter. The project is named 'TweetProbe' because the visualization framework is designed to present patterns of metadata, topical distribution (in terms of emerging hashtags), and live activity within the current time window. In particular, the short time window is the key component: it enables users of the application to detect real-time trends, local events, natural disasters, and spikes in social signals at a fine-grained time scale.
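The poster does not describe the implementation of the short time window, so the following is only a minimal sketch of the idea: count hashtags over a sliding window of a few seconds or minutes so that sudden spikes surface immediately. The class name WindowedHashtagCounter and the 60-second window are hypothetical choices, not taken from the project.

```python
# Minimal sketch (not from the original project): counting hashtags over a
# short sliding time window so that sudden spikes become visible.
import time
from collections import Counter, deque

class WindowedHashtagCounter:
    """Hypothetical helper: keeps only hashtags seen in the last N seconds."""

    def __init__(self, window_seconds=60):
        self.window_seconds = window_seconds
        self.events = deque()          # (timestamp, hashtag) pairs
        self.counts = Counter()

    def add(self, hashtag, timestamp=None):
        timestamp = timestamp if timestamp is not None else time.time()
        self.events.append((timestamp, hashtag))
        self.counts[hashtag] += 1
        self._expire(timestamp)

    def _expire(self, now):
        # Drop events that have fallen out of the time window.
        while self.events and now - self.events[0][0] > self.window_seconds:
            _, old_tag = self.events.popleft()
            self.counts[old_tag] -= 1
            if self.counts[old_tag] == 0:
                del self.counts[old_tag]

    def top(self, k=10):
        return self.counts.most_common(k)

# Usage: feed hashtags as tweets arrive, then poll top() to drive the display.
counter = WindowedHashtagCounter(window_seconds=60)
counter.add("earthquake")
counter.add("earthquake")
counter.add("worldcup")
print(counter.top(3))   # [('earthquake', 2), ('worldcup', 1)]
```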
Project Motivation

Objective: Develop a social-stream visualizer which provides... The framework, named 'TweetProbe', is designed to present patterns of metadata, topical distribution (in terms of emerging hashtags), and live activity within the current time window.
System Architecture

A. Twitter Stream (excerpt from dev.twitter.com)
B. Under the Hood (Backend Data Processing)
C. Raw Data [Tweet Metadata] [User Metadata]
D. Front-End Visualization Layer
D.1. Raindrop Message Visualizer
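The poster does not include the backend code, so the sketch below only illustrates the "Raw Data" step under an assumption: that each message arriving from the stream is a Twitter Streaming API (v1.1) JSON payload, whose standard fields (id_str, created_at, text, entities.hashtags, and the embedded user object) are split into the tweet metadata and user metadata consumed by the visualization layer. The function name split_metadata is hypothetical.

```python
# Sketch of the backend "raw data" step (assumed, not taken from the project
# code): split an incoming Twitter Streaming API (v1.1) JSON payload into the
# tweet metadata and user metadata the visualization layer consumes.
import json

def split_metadata(raw_line):
    """Parse one line of the stream into (tweet_metadata, user_metadata)."""
    tweet = json.loads(raw_line)
    user = tweet.get("user", {})

    tweet_metadata = {
        "id": tweet.get("id_str"),
        "created_at": tweet.get("created_at"),
        "text": tweet.get("text"),
        "hashtags": [h["text"] for h in tweet.get("entities", {}).get("hashtags", [])],
        "coordinates": tweet.get("coordinates"),
    }
    user_metadata = {
        "id": user.get("id_str"),
        "screen_name": user.get("screen_name"),
        "followers_count": user.get("followers_count"),
        "location": user.get("location"),
    }
    return tweet_metadata, user_metadata

# Usage: each line delivered by the streaming connection is one JSON object.
example = '{"id_str": "1", "created_at": "Mon Jun 10 20:00:00 +0000 2013", ' \
          '"text": "demo #tweetprobe", ' \
          '"entities": {"hashtags": [{"text": "tweetprobe"}]}, ' \
          '"user": {"id_str": "42", "screen_name": "jaykang", "followers_count": 7}}'
print(split_metadata(example))
```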
The iPhone application provides a minimal yet natural interface for interaction with the installation. The application tracks finger positions on screen (shown as the white square in the images above), taps, proximity, device orientation, and audio input from the microphone to provide multimodal input to the installation, and it provides multimodal feedback using the display, speaker, and vibration capabilities of the phone. These input and output mechanisms are critical in allowing an audience member to engage with and feel part of the installation.

The Processing application handles how a user is displayed and sonified in the installation. It differentiates users from one another by representing every participant with their own avatar on screen. When a user launches the iPhone application, he/she connects to the Processing application, where orbs represent the individual participants; in the first image above there are three participants in the installation. A user can use the phone screen as a track pad to move their avatar in the virtual space, and the iPhone's orientation determines the color of the phone screen, which matches the color of the user's orb.

If a user covers the proximity sensor on the phone, their avatar is hidden in the installation space (the orb disappears); while hiding, the participant cannot do anything else in the virtual space until they come out of hiding. Tapping the screen with two fingers triggers the user's sound in the installation, and moving around in the installation also triggers it. Tapping the screen with three fingers releases a missile into the virtual space, which can hit other participants.
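The text does not say how messages travel between the iPhone application and the Processing application, so the following dependency-free sketch only illustrates the described gesture-to-action mapping (proximity cover hides the avatar and blocks other actions, two-finger tap and movement trigger the user's sound, three-finger tap launches a missile). The event and function names are hypothetical.

```python
# Illustration only (the original pairs an iPhone app with a Processing
# application; the transport and the names below are assumptions): map the
# described gestures to installation-side actions for one participant's avatar.
from dataclasses import dataclass

@dataclass
class InputEvent:
    kind: str            # "tap", "move", or "proximity"
    fingers: int = 0     # number of fingers for a tap
    covered: bool = False
    x: float = 0.0
    y: float = 0.0

def handle_event(event, avatar):
    """Update a participant's avatar according to the description above."""
    if event.kind == "proximity":
        # Covering the proximity sensor hides the avatar; while hidden,
        # no other action is processed.
        avatar["hidden"] = event.covered
        return None
    if avatar.get("hidden"):
        return None
    if event.kind == "move":
        avatar["x"], avatar["y"] = event.x, event.y
        return "play_sound"            # moving also triggers the user's sound
    if event.kind == "tap" and event.fingers == 2:
        return "play_sound"
    if event.kind == "tap" and event.fingers == 3:
        return "launch_missile"
    return None

avatar = {"x": 0.0, "y": 0.0, "hidden": False}
print(handle_event(InputEvent(kind="tap", fingers=3), avatar))  # launch_missile
```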
Future work: ...