April 27, 2018
Corporations worldwide are investing in leadership training for their managers, and Pearson Online & Blended Learning (OBL) is no exception. OBL is a leader in virtual K-12 education, and most staff work remotely, fully immersed in their virtual workplace. Online meetings with text chat and streaming video, as well as webcasts, are commonplace for all employees. A natural evolution of OBL’s training initiative is therefore a facet entitled LeaderCast (LC), a monthly management training webcast consisting of a streamed pre-recorded video followed by a text chat-based Live Questions & Answers session with the managers featured in the video. LC launched in August 2017 and will close out the 2017-2018 school year with its ninth episode around the time of this writing.
Each month, approximately one-third of invited managers log in to the LC webcast, which OBL deems an acceptable attendance rate. However, participation in the Live Q&A, defined as an attendee typing into the webcast’s chat pod, averages an unacceptable 12% of attending managers. Participation in online trainings is an important measure because it gauges attendees’ engagement with the content and how well they are learning it (Hrastinski). OBL tasked LC with raising its participation rate to an average of 40% of attendees during the 2018-2019 school year. To do so, OBL must discover what motivates LC attendees to use the chat pod, as well as what discourages them from engaging with the Live Q&A.
The scope of this project is limited to the 369 managers on the LC monthly invite list, all of whom reside within the United States. While this research will make for an interesting comparison to OBL’s K-12 webcast engagement data, it is unlikely that the results will be generalizable to the K-12 population; the data may, however, be applicable to employees at lower levels within the company or to employees at smaller companies.
Multi-modal web conferencing such as that used for LC has risen to popularity only within the past decade, leaving researchers little time to study it. Furthermore, online technology and users’ relationships to that technology are constantly evolving, meaning that existing research can quickly become irrelevant.
Existing research supports OBL’s goal of raising participation rates by concluding that online students who participate meaningfully in online learning environments typically earn higher grades than their non-participatory peers (Hrastinski; He). In terms of meaningful participation, He’s study found a positive relationship between the number of questions asked during chat and a student’s final grade, but no relationship between any other type of chat interaction and final grade (p. 97). Although the managers attending LC do not receive grades, their willingness to ask questions of each month’s guest speaker and their synthesis of the knowledge presented during LC sessions may be measurable through other data points, such as their direct reports’ employee satisfaction survey results and turnover rates.
An earlier study of online chat similar to that used by LC noted that the chat medium’s “lack of bandwidth and multiple users per channel makes it more difficult to initiate interactions than in [face-to-face] interaction,” which may still ring true for the approximately 100 managers who log in to LC each month (Rintel & Pittam, p. 530). In He’s content analysis of college students using online chat during a virtual lesson, “students were much more active in interacting with their peers than with their instructors” (p. 98). Despite the inherent lack of social cues in a chat window, students are still more willing to interact with peers than with teachers.
Interestingly, a content analysis of Google Hangouts chats found a barrier to communication when changing modes from video to text (Rosenbaum, Rafaeli, & Kurzon). This raises the possibility that LC attendees are less likely to participate using a less rich modality (text chat) than the pre-recorded video experienced immediately prior. However, the Google Hangouts study examined public hangouts without the informed consent of participants, a stark contrast to the episodic and anticipated nature of LC episodes, wherein managers understand their participation is being monitored.
One researcher found employee training sessions to be more effective when attendees were “oriented about the importance of [the] training and it’s usefulness” beforehand (Ibrahim, p. 154). This suggests that OBL should treat LC episodes as only part of a larger initiative and examine overarching communication regarding management training as having an impact on effectiveness.
Existing research also identifies a need to “investigate student perspectives” and “explore the development of student discourse over time” (Burnett, p. 259). This study aims to address that knowledge gap by examining LC attendees as students in their leadership development training.
This study will examine online participation during a monthly management training webcast (LC). Attendees are all managers with Pearson OBL, a virtual education company whose employees mostly work remotely and communicate exclusively via online technology. For the purposes of this study, “participation” is defined as an LC attendee typing into the webcast’s chat pod during the Live Q&A portion of the training.
The participation data on which this study is based comes from the first five LC episodes, aired monthly between August 2017 and January 2018. LC uses Zoom as its webcasting platform, as Zoom provides reliable streaming technology for large group meetings and supplies LC organizers with a log of sign-ins and sign-outs, as well as a log of chat pod activity. From this information, OBL challenged the LC team with the goal of raising participation rates from ~12% to ~40%. Before evolving LC to meet this goal, the team must understand which aspects of its episodes encourage and discourage participation.
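The Zoom logs described above make the participation-rate calculation itself straightforward. Below is a minimal sketch in Python, assuming a hypothetical log format (a list of sign-in names and a list of (sender, message) chat entries); Zoom’s actual export format will differ, and the names are invented placeholders.

```python
# Hypothetical sketch of the participation-rate calculation.
# The log format and names below are assumptions, not Zoom's actual export.

def participation_rate(sign_ins, chat_messages):
    """Share of unique attendees who typed at least one chat message."""
    attendees = set(sign_ins)
    chatters = {sender for sender, _ in chat_messages} & attendees
    return len(chatters) / len(attendees) if attendees else 0.0

sign_ins = ["Avery", "Blake", "Casey", "Drew", "Ellis", "Finley", "Gray", "Harper"]
chat = [("Casey", "How do you handle remote onboarding?")]
print(round(participation_rate(sign_ins, chat), 2))  # 1 of 8 attendees -> 0.12
```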
LC’s overarching research question is: How can we increase overall attendee motivation to communicate via the chat pod during Live Q&A? While we know that the participation rate is the dependent variable, deductive research must be conducted to determine an independent variable (or variables). To efficiently address this question while using the data available from Zoom, we propose a three-pronged mixed-method communication study.
First, we will conduct a content analysis of the chat logs to categorize types of communication, as well as record the duration of each chat session and the names of any participants who chat more than their peers. The unit of analysis is one chat message, regardless of how many thoughts or sentences a participant includes in that message. We will follow He’s categorization of student messages by using three categories: questions related to the training topic, check-ins and check-outs, and all other chat messages. For quantitative analysis, communication types will be assigned numbers for a nominal level of measurement: questions will be coded as 1, check-ins and check-outs as 2, and all other messages as 3. Comparing the types of communication sent by participants will help LC estimate the percentage of attendees who are truly learning, as well as suggest how large a gap must be closed to reach OBL’s goal of 40% participation.
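As a sketch, the nominal coding scheme above could be tallied as follows once human coders have assigned a category to each message; the coded messages below are invented for illustration, and the counting is done by machine only after the human coding step.

```python
# Illustrative tally of the nominal coding scheme (1 = question,
# 2 = check-in/check-out, 3 = other). Codes are assigned by human coders;
# the example list below is invented.

QUESTION, CHECK_IN_OUT, OTHER = 1, 2, 3

def tally(codes):
    """Count coded chat messages per nominal category."""
    counts = {QUESTION: 0, CHECK_IN_OUT: 0, OTHER: 0}
    for code in codes:
        counts[code] += 1
    return counts

# One unit of analysis = one chat message, however many sentences it holds.
coded_messages = [QUESTION, CHECK_IN_OUT, OTHER, QUESTION, CHECK_IN_OUT]
print(tally(coded_messages))  # {1: 2, 2: 2, 3: 1}
```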
A pre-test to ensure inter-coder reliability will be conducted using two LC episodes selected via simple random sampling, such that each of the nine episodes has an equal chance of being chosen. During the pre-test and full content analysis, coders will be given LC episodes based on random assignment, so that each coder has an equal chance of receiving each episode.
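The sampling plan above can be sketched as follows; the coder names are placeholders, and the fixed seed is used only so the draw is reproducible, not as part of the design.

```python
import random

# Sketch of the sampling plan: two of the nine episodes drawn without
# replacement for the reliability pre-test, then the remaining episodes
# randomly assigned to coders. Coder names and the seed are placeholders.

rng = random.Random(2018)              # fixed seed for reproducibility only
episodes = list(range(1, 10))          # episodes 1-9
pretest = rng.sample(episodes, 2)      # each episode equally likely

coders = ["Coder A", "Coder B"]
remaining = [e for e in episodes if e not in pretest]
rng.shuffle(remaining)                 # random assignment of episodes
assignments = {e: coders[i % len(coders)] for i, e in enumerate(remaining)}
print(pretest, assignments)
```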
Since there is a relatively small amount of data available for coding – nine episodes once this research is conducted – LC will also pull out participants’ questions for qualitative analysis in search of common themes. By identifying commonalities across chat questions over time, LC episodes can be tailored to address frequent themes, which may in turn increase participation. Recording the duration of each chat session and the names of the most participatory managers during content analysis mainly aids reference during focus groups. However, comparing this additional chat data to the main content analysis may prove enlightening, such as exploring a possible relationship between chat duration and the number of questions asked. Any data that suggests a causal relationship to the dependent variable will inform the following step of the study.
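Once per-episode chat data exists, the exploratory comparison of chat duration and question counts could be as simple as a Pearson correlation across episodes; the figures below are invented placeholders, not real LC data.

```python
from math import sqrt

# Exploratory sketch: Pearson correlation between Q&A chat duration and
# number of questions asked per episode. All figures are invented.

def pearson(x, y):
    """Pearson product-moment correlation coefficient of two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

durations = [12, 15, 9, 20, 14]    # minutes of Q&A chat per episode (invented)
questions = [3, 5, 2, 7, 4]        # questions coded per episode (invented)
print(round(pearson(durations, questions), 2))
```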
After the content analysis, we will conduct a focus group of LC attendees to explore their motivations for and barriers to chatting during Live Q&A. This focus group defines an LC attendee as any manager who has signed on to any LC episode, as logged by Zoom. Zoom provides LC organizers with a comprehensive list of sign-ons for each episode, which may be combined into one document and cleared of duplicates.
Focus group participants will be chosen via simple random sample from the final, cleaned list of LC attendees. In accordance with Wimmer & Dominick’s best practices, our focus group is limited to six people, plus the moderator. Here, LC can reference the list of most participatory managers compiled during content analysis to ensure that no more than one is included in the focus group; with only six participants, two or more extremely participatory attendees would bias the results. If the initial random sample is deemed unsatisfactory for this reason, a completely new random sample of names will be generated.
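The selection procedure above can be sketched as follows, assuming hypothetical attendee names; the redraw loop enforces the rule of at most one highly participatory manager per group.

```python
import random

# Sketch: clean the combined sign-on list of duplicates, then draw a
# six-person simple random sample, redrawing completely if more than one
# highly participatory manager lands in the group. Names are placeholders.

def draw_focus_group(attendees, top_chatters, size=6, seed=None):
    rng = random.Random(seed)
    pool = sorted(set(attendees))          # cleared of duplicate sign-ons
    while True:
        sample = rng.sample(pool, size)    # simple random sample
        if sum(name in top_chatters for name in sample) <= 1:
            return sample                  # at most one heavy chatter

attendees = [f"Manager {i}" for i in range(1, 41)] * 2   # duplicated sign-ons
group = draw_focus_group(attendees,
                         top_chatters={"Manager 3", "Manager 17"},
                         seed=42)
print(len(group))  # 6
```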
The LC focus group will be held via video chat on Google Hangouts, a platform with which OBL employees are required to be familiar. Google Hangouts allows users to communicate via text, audio, and audio/video synchronously, and Hangouts sessions can be fully recorded for later analysis. (Due to the remote nature of working at OBL, an in-person focus group is impossible.) LC’s focus group will take place on a day and at a time similar to a typical LC episode and, like LC episodes, will last one hour, in order to ensure maximum availability of group participants.
During the focus group, the moderator will introduce the nature of the research, gently reinforcing why online participation is the best indicator of online learning. The six participants will be guided to discussions on topics that our literature review suggested might inhibit propensity to communicate during online training: technological issues, multi-modal difficulties of communicating via text after a video, and lack of understanding towards the importance of training. The moderator will also encourage participants to candidly discuss other barriers to chat pod communication specifically during LC.
To end the focus group on a positive note before sending managers to work the rest of their day, the moderator will guide them in discussing what aspects of LC encourage their participation. Based on the literature review, these topics include: training organizers encouraging interaction, affirming comments, and maintaining multiple threads or pods of chats. Finally, focus group participants will be asked to discuss other factors of LC (or other large webcasts) that motivate them to participate in chat.
After transcribing and qualitatively analyzing the focus group session, the culmination of our research is an online Google survey sent as a census to all 369 managers on the LC monthly invite list. The survey will be sent between two LC episodes (two weeks after one and two weeks before the next) to mitigate the recency effect, wherein a respondent may answer based only on the most recent LC episode. The survey will be limited to 10 questions to minimize respondent dropout and fatigue, and sent from an official OBL account explaining that survey participation is mandatory. One week after the initial e-mail, a reminder will be sent to non-respondents.
While it is difficult to assert what topics the survey will cover before the preliminary research is conducted, we aim to collect ordinal-level data by constructing the survey with Likert and semantic differential scales. Respondents will likely be asked to rate their attitudes toward LC and its Live Q&A component, as well as the presence of the motivators and inhibitors to using the chat pod that were discovered during the focus group.
With this three-pronged mixed methodology conducted in order, the LC team will be armed with a wealth of qualitative and quantitative data: it will learn why attendees do and do not participate in the text chat Live Q&A, and it will receive direct feedback on how to improve LC episodes to encourage more participation throughout the 2018-2019 school year.
Burnett, C. (2003). Learning to chat: Tutor participation in synchronous online chat. Teaching in Higher Education, 8(2), 247-261. doi:10.1080/1356251032000052474
Clay, C. (2012). Great webinars: How to create interactive learning that is captivating, informative, and fun. Center for Creative Leadership. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/wsu/detail.action?docID=818165
He, W. (2013). Examining students’ online interaction in a live video streaming environment using data mining and text mining. Computers in Human Behavior, 29(1), 90-102. doi:10.1016/j.chb.2012.07.020
Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1), 78-82. doi:10.1016/j.compedu.2008.06.009
Ibrahim, M. (2004). Measuring training effectiveness. Journal of Management Effectiveness, 4(3), 147-155.
Rintel, E. S., & Pittam, J. (1997). Strangers in a strange land: Interaction management on Internet Relay Chat. Human Communication Research, 23(4), 507-534. doi:10.1111/j.1468-2958.1997.tb00408.x
Rosenbaum, L., Rafaeli, S., & Kurzon, D. (2016). Corrigendum to “Participation frameworks in multiparty video chats: Cross-modal exchanges in public Google Hangouts” [J. Pragmat. 94C (2016) 29–46]. Journal of Pragmatics, 97, 93. doi:10.1016/j.pragma.2016.05.005
Wimmer, R. D., & Dominick, J. R. (2013). Brief Guide for Conducting Focus Groups. Retrieved April 19, 2018, from http://www.rogerwimmer.com/mmr10e/mmrfocusgroups.htm