7 Method

Because the intersection of teacher induction and educators’ social media use is under-researched, I will start by interviewing ECEs about their experiences as new educators: what supports for professional learning ECEs seek during induction, if any; from whom; and how, if at all, they use social media with the intention of seeking supports and connections. From the themes inferred from these interviews, I will develop and then administer a quantitative survey to see how broadly other ECEs identify with those themes. This methodological approach follows an exploratory sequential mixed methods design (Creswell, 2014, p. 225), an open-ended (i.e., exploratory) approach in which data collection occurs in two stages (i.e., sequential): qualitative data are collected first and then used to determine which quantitative data should be collected. In mixed methods notation, this design is written as QUAL → quan, and the procedure will consist of three stages: (1) interviews, (2) surveys, and (3) interpretation.

This research design will allow me to develop “better measurement instruments by first collecting and analyzing qualitative data and then administering the instruments to a sample” (Creswell, 2014, p. 218). In this exploratory sequential design, the quantitative stage depends directly upon the earlier qualitative stage because the analysis of the interview data will be used to develop the survey instrument. Also, the two databases (i.e., qualitative and quantitative) are not compared in an exploratory sequential design because they are drawn from different samples; instead, the purpose of this design is “to determine if the qualitative themes can be generalized to a larger sample” (Creswell, 2014, p. 227). Specifically, this study comprises three stages: (1) a qualitative stage focused on interviews, (2) a quantitative stage focused on surveys, and (3) a final interpretation.

Creswell (2011) noted numerous controversies raised by mixed methods research, including the foundational question of whether it is appropriate, or even possible, to mix paradigms. Even when researchers accept the possibility of blending elements from different paradigms, practical concerns remain: mixed methods research requires a large amount of data and depends upon expertise with multiple research techniques. For instance, in an exploratory sequential design, care must be taken when trying to quantify or generalize qualitative data. However, in my proposed study, I believe the strengths of the exploratory sequential mixed methods design outweigh these concerns. For the purposes of this study, “the combination of methods provides a better understanding than either quantitative method or qualitative method alone” (Creswell, 2011, p. 280). The exploratory sequential design will allow me to: (a) use qualitative interview data to understand ECEs’ induction experiences and who they turn to for support, as well as related social media use, if any; (b) develop a quantitative survey instrument based on emergent themes from the interviews; and (c) use the survey to generalize the qualitative findings. Because of the absence of prior research at the intersection of teacher induction and ECEs’ social media use, this sequential, inductive, mixed methods approach will improve instrument fidelity (Collins, Onwuegbuzie, & Sutton, 2006) by basing the survey scales on codes and themes drawn from interviews with early-career educators.

7.1 Stage 1 Methods: Qualitative Interviews

In this first stage, I will conduct interviews with early-career educators to gather qualitative data about ECEs’ induction challenges and social media use. I will collect data to the point of theoretical saturation (Glaser & Strauss, 1967) — what Glesne (2016) defined as the point when “successive examination of sources yields redundancy and that the data you have seem complete and integrated” (p. 194). In other words, I will continue to interview new participants until I am no longer finding new themes emerging from additional interviews.

1.1 Participants. The population of this study will be early-career educators (ECEs) working in the United States. Participants need not have majored in education as an undergraduate or pursued a graduate degree in education. They will likely be employed in a variety of subject areas. Following Ronfeldt and McQueen’s (2017) framing of ECEs, participants must have been employed in education for no more than three years.

From the population of U.S.-based early-career educators, I will select a purposeful sample (Lincoln & Guba, 1985) — a group of participants that will provide information-rich cases for in-depth study. I will seek to interview participants with a variety of backgrounds and experiences, purposefully seeking diversity in categories such as: (a) years of teaching experience, (b) grade level taught, (c) content area taught, (d) type of school (urban, suburban, rural), (e) zip code, (f) type of educator preparation (i.e., undergraduate, graduate, alternative certification, none), (g) ethnicity (Asian, Black/African, Caucasian, Hispanic/Latinx, Middle Eastern, Native American, Pacific Islander, prefer to self-describe, prefer not to answer), and (h) gender (female, male, non-binary/third gender, prefer to self-describe, prefer not to answer). Finally, because of this study’s specific research purpose, I will collect information related to social media use: (i) social media platforms used, (j) length of time active on social media, (k) frequency of social media use (daily, several times a week, several times a month, rarely), and (l) professional purposes for social media use (following news and trends in education, seeking and giving emotional support related to job, finding and sharing educational resources and curricular materials, collaborating with colleagues in education, other).

Specifically, I will sample participants from current Master of Arts in Educational Technology (MAET) students at Michigan State University because of my connections to the MAET program. That is, I am an alumnus of the degree program and currently teach one MAET course each semester as part of my doctoral assistantship. This group is a convenience sample, but it is also a purposeful sample. Although there is uniformity among MAET students in terms of a shared initiative to pursue a graduate degree during their first three years as educators and a presumably shared interest in educational technologies, there remain differences within this group. Students enter the MAET graduate program from a variety of educator preparation programs — including some with no formal background in education — and are educators in a variety of roles, subject areas, and geographic locations around the U.S. A benefit of this sample is that I can be confident that all participants have been introduced to social media use for professional learning in education. This shared background results from nearly every MAET course requiring some form of social media activity as an assignment, and the MAET program itself relies on social media to communicate and interact with its students. Nevertheless, having taught many MAET students myself, I believe I will be able to select a purposeful sample that demonstrates varying attitudes toward and uses of social media. In sum, although purposefully sampling MAET students will result in a sample that is not truly diverse, these participants are advantageous because of their common background with social media for professional use and differences in attitudes toward social media. Thus, although this sample will not be generalizable, it will adequately serve the purpose of generating themes that can be built into a quantitative survey.

At the outset, I will seek permission from the MAET program coordinators to invite MAET students to participate in the interview stage of my study. I will send a publicity email (Appendix A) to whatever list of MAET students I am granted access to and ask them to complete a Willingness to Participate form (Appendix B) through the online software Qualtrics (n.d.). At the start of each interview, I will ask for verbal consent by reading the script in the Interview Consent Form (Appendix C). Michigan State University’s Human Research Protection Program does not require a signature as an element of consent for “exempt” protocols (see https://hrpp.msu.edu/templates/consent.html), so I will only ask participants to verbally consent to this statement.

1.2 Pilot study. I anticipate asking interview questions (Appendix D: Initial Interview Protocol) grouped by topic into six sections. Following a first section where I ask background and warm-up questions, the second section will focus on challenges and struggles experienced as a new educator (RQ2). In this section, I will be prepared to ask follow-up questions related to themes from the induction literature: everything is new, still learning to teach, and developing professional identity (and related issues of agency and community). In the third section, I will ask about the types of supports sought as a new educator (RQ1), including reasons why the interviewee may not have pursued supports (RQ2). I will also ask about the kinds of supports sought, drawing upon the literature for responses related to people orientation versus content orientation (Prestridge, 2019) and formal versus informal learning opportunities. In the fourth section, I will ask about interpersonal connections sought for support as a new educator (RQ3), and I anticipate asking follow-up questions about the types of individuals and groups that span local to global contexts. In the fifth section, I will ask questions about social media as a tool to access supports as a new educator (RQ4). In the sixth and final section, I will ask for any closing thoughts, specifically whether there are any additional ways the interviewee would link their experiences as a new educator and their use of social media.

Each section builds upon the preceding section(s), and in each section I will ask “Why?” as a follow-up question to seek deeper understanding of participants’ initial answers. Section 5 of the interview protocol is adapted from Carpenter and Krutka’s (2014) survey instrument, which asked educators about their use of Twitter for professional purposes. I will ask similar questions for each social media platform that interviewees report using to seek supports and interpersonal connections during induction.

I will start by asking several participants to pilot my Willingness to Participate form (Appendix B) and Initial Interview Protocol (Appendix D). These pilot participants will have characteristics similar to those of the actual sample, but they will not be MAET students. I will recruit the pilot interviewees from my former undergraduate students in CEP 416 who have since started teaching positions. I will facilitate mock interviews with these participants on the video-communication software Zoom (n.d.). I will ask participants in these pilot interviews not just to respond to my questions but also to critically evaluate the questions themselves and give feedback on how the questions could be improved. I will continue these pilot interviews to the point of saturation — that is, when I am no longer receiving new suggestions for how to improve the interview protocol.

1.3 Qualitative data collection. With an updated and revised interview protocol, I will host interviews on Zoom. I will use Zoom’s built-in functionality to record audio and video of the interviews, and I will also take my own notes during the interviews. Because I am planning semi-structured interviews, the questions listed in the interview protocol (Appendix D) will serve as starting points, but the exact phrasing of questions will vary, and I will likely ask follow-up questions based both on participants’ initial answers and on expected themes drawn from the literature. For instance, if an interviewee says they do not use social media to access induction supports, I will ask them why not. I will conduct interviews until the point of theoretical saturation — in other words, when I am no longer finding new codes or themes in participants’ responses.

1.4 Qualitative data analysis. At the same time I am conducting interviews with new participants, I will also start transcribing completed interviews. I will do these transcriptions “by hand” — that is, by listening to the interview recordings and typing out what was said. However, I will not necessarily transcribe the full interview verbatim: some portions I will paraphrase, and others I will transcribe word for word. During my first listen to each recording, I will note major points in the conversation, and I will then go back and listen a second time to those noteworthy portions, which I will transcribe carefully and precisely.

I will upload interview transcripts to the software ATLAS.ti (2019). After I have transcribed each interview, I will send a copy of the transcript to the interview participant to ask whether there are additional points they would like to make beyond what was captured in the written transcript. I will also specifically ask for clarification in any instances where their responses were inaudible.

After I have transcribed an interview, and in the midst of continuing to interview new participants and transcribe those interviews, I will conduct thematic analysis of the interview transcripts using the software ATLAS.ti (2019), which will help me track and compare codes and themes within and across interviews. I will follow Saldaña’s (2016) recommended procedures for open-ended qualitative analysis. Following his maxim, “Qualitative analysis calculates meaning” (Saldaña, 2016, p. 10), I will analyze the transcribed interview data for “consolidated meaning.” For the sake of simplicity as I propose this research, I will refer to these various possibilities for meaning as “themes,” while acknowledging that this meaning could take the form of categories, themes, concepts, or assertions.

I anticipate two distinct cycles for the thematic analysis process. Saldaña (2016) used the language of “cycles” to emphasize that these steps are iterative; both the first and second cycles contain processes of coding and recoding the data. The first cycle will involve the assignment of codes, that is, “a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data” (Saldaña, 2016, p. 4), to sections of the interview transcripts. These interview portions will be determined inductively, based on which words, phrases, or even paragraphs seem to correspond to a code; the code is meant to capture the essence of the section (Saldaña, 2016). The second cycle of coding will entail the reorganization and synthesis of the first cycle codes into new wholes, called categories; themes are then inferred from the categories (Saldaña, 2016). During the second cycle, the codes should therefore decrease in number and become more focused en route to developing broader categories and themes. After completing the second cycle of coding, I will test inter-rater reliability (IRR); I discuss IRR in more detail in the Trustworthiness of Research section. After checking IRR, if there are still areas where I am unsure about my codes or categories, I will send that portion of the coded interview transcript to the interviewee and ask if they would like to provide further clarification. I will incorporate any additional input when developing the survey instrument.

Saldaña (2016) illustrated the fluid nature of this entire coding process through what he termed eclectic coding, an approach that demonstrates “how an analyst can start with an array of coding methods for a ‘first draft’ of coding, then transition to strategic ‘second draft’ recoding decisions based on the learnings of the experience” (p. 212). To capture insights during the unpredictable and non-linear process of thematic analysis, I will write analytic memos at each step, noting key observations, connections, and inferred meanings made during the coding process.

1.5 Trustworthiness of research. I will take several steps to help readers find the qualitative stage of this proposed research to be trustworthy. Maxwell (2013) described validity in qualitative research as “the correctness or credibility of a description, conclusion, explanation, interpretation, or other sort of account” (p. 122). Creswell and Miller (2000) noted that they find themselves most closely aligned “with the use of systematic procedures, employing rigorous standards and clearly identified procedures” (p. 129) — validity practices they associated with the “postpositivist or systematic paradigm” (p. 126). As I have considered various approaches to validity in qualitative work, I too find the systematic procedures to be most persuasive. Creswell and Miller (2000) also argued that in contrast to the ways quantitative researchers make inferences, “qualitative researchers use a lens not based on scores, instruments, or research designs but a lens established using the views of people who conduct, participate in, or read and review a study” (p. 125). In qualitative research, I myself — as researcher — am the instrument used in inquiry (Patton, 2002). As is the case with any research, it is essential to reduce the error — or bias — of the instrument. Creswell and Miller (2000) suggested reducing bias by approaching qualitative analysis with three different lenses: the researcher, participants, and external readers; here I describe systematic procedures linked to each lens.

First, Creswell and Miller (2000) suggested looking at analysis through a researcher lens by triangulation: “a systematic process of sorting through the data to find common themes or categories by eliminating overlapping areas” (Creswell & Miller, 2000, p. 127). I will triangulate across participants during the inductive coding and categorization steps of the thematic analysis of the interview data.

Second, Creswell and Miller (2000) suggested looking at analysis through a participant lens by member checking: the process of confirming with participants that the themes and categories I inferred make sense and are sufficiently supported with evidence. For each interview, I will give participants two opportunities to provide member checks. First, after I have completed the written transcription of an interview, I will give the interviewee the opportunity to read through the transcript and add points they did not get to say during the interview. Second, after I have completed the second cycle of thematic analysis, I will send the coded transcript to the interviewee and point to one or two specific places where I was unsure about my code, categories, and themes. I will ask the interviewee if they would like to provide further clarification.

Third, Creswell and Miller (2000) suggested looking at analysis through an external reader lens by creating an audit trail: the process of “documenting the inquiry process through journaling and memoing, keeping a research log of all activities, developing a data collection chronology, and recording data analysis procedures clearly” (Creswell & Miller, 2000, p. 128). Throughout the study, I will create thorough documentation through analytic memos, and I will record and update notes and definitions of codes, categories, and themes in a codebook. I will ask at least one member of my dissertation committee to carefully read through this audit trail to ensure my findings are grounded in the data and my inferences are logical.

Finally, after the second cycle of coding, I will test inter-rater reliability (IRR). I will recruit a second coder from my doctoral program to apply definitions from the codebook to a portion of the interview transcripts — sections randomly selected from several interviews, representing approximately 10% of the interview corpus. I will then calculate IRR scores — both percent agreement and Cohen’s kappa — to ensure that the emergent coding categories and definitions in my codebook are clear and reproducible. The Cohen’s kappa scores for each coding category should be .61 or higher to achieve what Landis and Koch (1977) described as “substantial” agreement (p. 165). If the kappas are .60 or less, I will discuss coding discrepancies with the second coder, revise the codebook, and select a new random sample of 10% of the interview corpus. I will repeat these steps until the kappas show substantial agreement. Ensuring inter-rater reliability is particularly important in this research design because I am using the codes and themes from the qualitative analysis to construct a survey instrument, and I will subsequently ask many early-career educators to complete that survey. If my categories and themes are unclear to one doctoral student colleague, I would have reason to worry about the clarity of my resulting survey questions for non-academic educators I do not know.
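
To illustrate how I anticipate computing these IRR statistics, the sketch below shows one possible calculation in R (R Core Team, 2018). It is a minimal illustration under stated assumptions rather than the final analysis script: the coder data are hypothetical, the percent agreement is computed in base R, and the kappa calculation assumes the irr add-on package.

    # Minimal sketch of the planned IRR check (hypothetical data).
    # Each row is one randomly sampled transcript segment; the two columns hold
    # the code applied by me and by the second coder for a single coding category.
    library(irr)  # assumed add-on package for Cohen's kappa

    codes <- data.frame(
      coder_1 = c("mentor", "mentor", "coach", "identity", "mentor", "identity"),
      coder_2 = c("mentor", "coach",  "coach", "identity", "mentor", "identity")
    )

    # Percent agreement across the sampled segments
    mean(codes$coder_1 == codes$coder_2)

    # Unweighted Cohen's kappa; I will look for kappa >= .61 for each coding
    # category, Landis and Koch's (1977) threshold for "substantial" agreement
    kappa2(codes, weight = "unweighted")

In practice, I would repeat this calculation for each coding category in the codebook, revising definitions and re-sampling transcript segments whenever a kappa falls at or below .60.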

7.2 Stage 2 Methods: Quantitative Surveys

In this second stage, I will develop and administer a survey to gather quantitative data. Because the purpose of this stage is to generalize the qualitative findings, I will aim to distribute the survey instrument as broadly as possible, reaching as many U.S. early-career educators as I can. I will regularly monitor responses as tracked in Qualtrics (n.d.), and I will continue to publicize the survey on Twitter (as described in the Quantitative Data Collection section below) and send email reminders until I have confirmed complete and valid responses from a sufficiently large sample (at minimum, n ≥ 100) of early-career educators.

2.1 Participants. As was the case with the Stage 1 qualitative interviews, here in Stage 2 the population remains early-career educators (ECEs) working in the United States, employed in education for no more than three years. Participants need not have majored in education as an undergraduate or pursued a graduate degree in education, and they will likely be employed in a variety of subject areas.

2.2 Pilot study. I will start Stage 2 by building a survey instrument with questions and scales. The survey will be developed in and administered with Qualtrics (n.d.). DeVellis (2017) defined a scale as a measurement instrument consisting of a group of items combined into a composite score that is meant to indicate the degree or level of an underlying variable. He called this underlying variable a latent variable: a phenomenon of interest that is not directly observable. Scales help measure latent variables that cannot be assessed directly. Not every set of items is necessarily a scale; rather, each item that is part of a scale should be considered a test, even on its own, of the strength of the latent variable (DeVellis, 2017). Finally, it is also helpful to note the difference between scaling and categorizing. Categorizing describes the presence or absence of a phenomenon, assuming an all-or-nothing state; in contrast, scaling describes the degree of a condition, assuming a continuum (DeVellis, 2017). A common item format is the Likert scale, which presents degrees along a continuum between opposite poles (e.g., agree-disagree, very likely-very unlikely) or along a frequency continuum. Likert scales are well suited to measuring latent variables such as opinions, beliefs, and attitudes (DeVellis, 2017).

The survey instrument will begin with the Survey Consent Form (Appendix E). Although the exact content of the survey instrument will not be developed until after the qualitative stage is completed, I expect the survey instrument to be organized into five sections of questions and scales, one addressing each of my four research questions plus a final section of background information, parallel to the sections of the interview protocol:

  1. Types of supports for professional learning sought during induction (RQ1),
  2. Reasons for seeking supports for professional learning during induction and reasons for not seeking supports (RQ2),
  3. Interpersonal connections made for supports for professional learning during induction (RQ3),
  4. Social media as a modality to access supports for professional learning during induction (RQ4), and
  5. Background information matching the categories in the Willingness to Participate form (see Procedure Step 1.1 and Appendix B): (a) years of teaching experience, (b) grade level taught, (c) content area taught, (d) type of school (urban, suburban, rural), (e) zip code, (f) type of educator preparation (i.e., undergraduate, graduate, alternative certification, none), (g) ethnicity (Asian, Black/African, Caucasian, Hispanic/Latinx, Middle Eastern, Native American, Pacific Islander, prefer to self-describe, prefer not to answer), and (h) gender (female, male, non-binary/third gender, prefer to self-describe, prefer not to answer). Finally, because of this study’s specific research purpose, I will collect information related to social media use: (i) social media platforms used, (j) length of time active on social media, (k) frequency of social media use (daily, several times a week, several times a month, rarely), and (l) professional purposes for social media use (following news and trends in education, seeking and giving emotional support related to job, finding and sharing educational resources and curricular materials, collaborating with colleagues in education, other).

With results from qualitative thematic analysis of the interview data in Stage 1, I will follow Creswell’s (2014) recommended process for moving from qualitative analysis to quantitative scale development in constructing sections 1-4 of the survey instrument: “using the quotes to write items for an instrument, the codes to develop variables that group the items, and themes that group the codes into scales” (p. 226). I offer an overview of this process in Table 2.

Table 2

Moving from Qualitative to Quantitative in Survey Creation

Qualitative | Examples | Quantitative | Examples
Quotes | ‘I couldn’t have made it through my first year teaching without my mentor.’ | Items | ‘frequency of talking to mentor’, ‘topics discussed with mentor’
Codes | ‘mentor’, ‘coach’, ‘guide’, ‘expertise’ | Latent Variables | ‘value of having a mentor during induction’
Themes | ‘mentoring is essential during induction’ | Scales | Mentor Induction Support Scale

As an example, if “mentoring is essential during induction” were a theme inferred from the qualitative analysis, then a latent variable associated with this theme might be “the value of having a mentor during induction.” I would measure this variable by developing a Mentor Induction Support Scale, likely modeled on Strano and Queen’s (2012) survey development in their mixed methods study. To study identity management on Facebook, Strano and Queen (2012) interviewed 30 U.S. Facebook users to develop a scale measuring the likelihood of posting, untagging, or requesting deletion of certain images. This scale asked, “How likely would you be to (post, untag, request deletion) an image which…” and then offered 13 scenarios. In my study, the Mentor Induction Support Scale might ask a similarly constructed question, “How important would it be for you to have a mentor during induction so you could…,” and then offer a variety of scenarios based on the qualitative interviews. Respondents would answer a series of items, formatted as Likert scales presenting a continuum from “very important” to “not at all important.” I would describe this scale in my study report:

I will assess the latent variable “value of having a mentor during induction” using eight items on a 5-point Likert scale (range: 8-40) and calculate Cronbach’s alpha for this scale.

Cronbach’s alpha is a common measure of a scale’s internal consistency reliability, which I describe in more detail below. As a second example, to address RQ4, I will use a Social Media Usage Frequency Scale modeled on Valdez, Schaar, and Ziefle’s (2012) five-point Likert scale (1 = daily, 2 = several times a week, 3 = several times a month, 4 = rarer, 5 = never).

In general, I will use 5-point Likert scales. During the scale development stage of their exploratory sequential mixed methods study, Owen, Fox, and Bird (2016) argued that five points strike a balance between overwhelming respondents with too many options and still having enough categories to distinguish nuance between the different points. I am also aware of the threat that bots pose to online surveys by submitting spam responses. I will take precautions to filter out bots from my responses, such as requiring CAPTCHA verification from respondents, embedding “honeypot” items (i.e., questions invisible to humans but visible to bots), and including a question asking, “What is this survey about?” In addition, the Qualtrics (n.d.) survey software should help with further bot detection. I will also consider the inclusion of validation items (DeVellis, 2017) used to detect competing influences on respondents’ answers. For instance, DeVellis (2017) recommended the inclusion of a scale to measure social desirability (e.g., Strahan & Gerbasi, 1972) to determine if participants’ responses were unduly motivated by a desire to be perceived positively by the researcher.
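
As an illustration of how these precautions might translate into data screening, the sketch below shows one possible way to flag suspect or ineligible responses in R after exporting from Qualtrics. The column names (honeypot_item, survey_topic_check, years_teaching) are hypothetical placeholders rather than actual Qualtrics field names, and the exact screening rules would be finalized during the pilot.

    # Illustrative sketch of screening out likely bot or ineligible responses
    # from a hypothetical Qualtrics export; all column names are placeholders.
    library(dplyr)  # assumed add-on package for data manipulation

    raw <- read.csv("survey_export.csv", stringsAsFactors = FALSE)

    screened <- raw %>%
      filter(
        is.na(honeypot_item) | honeypot_item == "",          # humans leave the hidden item blank
        grepl("teach|educat|mentor|induction|social media",  # plausible free-text answer to
              tolower(survey_topic_check)),                  # "What is this survey about?"
        years_teaching <= 3                                   # inclusion criterion: ECEs only
      )

    # Number of responses removed by the screening rules
    nrow(raw) - nrow(screened)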

I will refine the survey instrument by administering a pilot survey to several early-career educators who, as with the interview pilot, are not MAET students. I will again start by asking former students from CEP 416 who have started teaching positions. I will ask these respondents to offer feedback on the initial survey instrument and suggestions for improvement. I will pilot the survey with new participants to the point of saturation — that is, when I am no longer receiving new suggestions. With this feedback, I will update the survey instrument.

After refining the survey instrument with pilot feedback, I will calculate Cronbach’s alpha as a measure of internal reliability for each scale included in the survey instrument. Remler and Van Ryzin (2011) defined this type of reliability as the “internal consistency of the various indicators. Do the indicators or items go together — do they paint a consistent picture?” (p. 122). DeVellis (2017) argued that, despite its limitations, Cronbach’s alpha is still useful because it provides a lower bound, or conservative estimate, of internal reliability. I will follow DeVellis’s (2017) recommendations that a Cronbach’s alpha between .65 and .70 is minimally acceptable, between .70 and .80 is respectable, and between .80 and .90 is very good. Because a scale’s alpha is influenced by both the number of items in the scale and the degree of covariation among the items, there is a tradeoff between keeping a scale short (which is desirable for respondents’ time) and lengthening the scale (which is desirable for increasing reliability). Thus, if a scale’s alpha is below .70, I will identify any items with lower-than-average correlations with the other items and test the effect of removing those items from the scale. If removing those items does not raise the alpha, I will instead add items to the scale to increase the alpha. On the other hand, if a scale’s alpha is above .90, I will reduce the number of items in the scale (DeVellis, 2017), which also reduces the burden on the respondent.
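
To make this reliability check concrete, the sketch below shows how I might compute Cronbach’s alpha and the item-level statistics that would guide these decisions in R. It is a minimal illustration rather than the final analysis script: it assumes the psych add-on package and a hypothetical data frame named mentor_items whose eight columns hold responses to the 5-point Likert items of the example Mentor Induction Support Scale described above.

    # Minimal sketch of the planned reliability check (hypothetical data).
    # Assumes `mentor_items` is a data frame whose eight columns are the
    # 5-point Likert items of the Mentor Induction Support Scale.
    library(psych)  # assumed add-on package for Cronbach's alpha

    alpha_out <- psych::alpha(mentor_items)

    # Overall Cronbach's alpha; I will look for a value of at least .70
    alpha_out$total$raw_alpha

    # Corrected item-total correlations, used to spot items that correlate
    # weakly with the rest of the scale
    alpha_out$item.stats$r.drop

    # Alpha if each item were dropped, which informs whether to remove weak
    # items (when alpha < .70) or shorten the scale (when alpha > .90)
    alpha_out$alpha.drop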

2.3 Quantitative data collection. Following the distribution strategies of survey studies such as Carpenter and Krutka (2014) and Trust et al. (2016), I will publicize the survey broadly on Twitter. First, I will send a series of publicity tweets (Appendix F) using different education-related hashtags (e.g., #NT2T, #ntchat, #isteten, #PSTPLN, #Edchat, #sschat, #iteachmath, #MTBoS, #NCTEchat, #engchat, #langchat, #michED, #educolor, #cleartheair, #eloned, #edcamp, #edumatch, #SocialMediaEd, #spedchat, #t2t, #teacheredchat, #teachertraining, #satchat, etc.) over the course of numerous weeks. Second, I will also ask teacher-educator colleagues (e.g., Koehler, Greenhow, Rosenberg, Greenhalgh, Akcaoglu, Carpenter, Morrison, Krutka, Trust, Heath, Dennen, Bagdy, Kessler, Lachney, Yadav, Foster, Peterson, etc.) to help tweet and retweet the invitation to complete the survey. Third, I will email a link to the survey to all MAET students who responded to the initial call (see step 1.1) but were not chosen for interviews. Fourth, at the end of the survey instrument, I will ask respondents to also consider sharing the survey with their own networks (i.e., snowball promotion). As a final note, interview participants will not be eligible to complete the survey, because this would “introduce undue duplication of responses” (Creswell, 2014, p. 227).

2.4 Quantitative data analysis. With all survey data collected, I will conduct quantitative analyses using the statistical computing software R (R Core Team, 2018). Because the purpose of this quantitative stage is to generalize the qualitative findings, I will focus on calculating descriptive statistics summarizing the survey results. More specifically, I will calculate the mean, standard deviation, median, and range for each survey item. Although this will not answer the research questions as deeply or with as thick description as the qualitative interviews, it will allow me to describe which induction supports are most desired (answering RQ1), the most common reasons ECEs report for seeking supports for professional learning during induction and for not seeking supports (answering RQ2), the most common types of interpersonal connections for induction support (answering RQ3), and how often ECEs use social media with this intent and which platforms are preferred (answering RQ4).
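
The sketch below illustrates the kind of per-item descriptive summary I plan to produce in R. It assumes, hypothetically, that the numeric survey items have been exported from Qualtrics into a data frame named survey_items and that the dplyr and tidyr add-on packages are available; it is a sketch of the intended output rather than the final script.

    # Minimal sketch of per-item descriptive statistics (hypothetical data).
    # Assumes `survey_items` is a data frame whose columns are numeric survey items.
    library(dplyr)
    library(tidyr)

    item_summary <- survey_items %>%
      pivot_longer(everything(), names_to = "item", values_to = "response") %>%
      group_by(item) %>%
      summarise(
        mean   = mean(response, na.rm = TRUE),
        sd     = sd(response, na.rm = TRUE),
        median = median(response, na.rm = TRUE),
        min    = min(response, na.rm = TRUE),
        max    = max(response, na.rm = TRUE)
      )

    item_summary  # one row per survey item, with range reported as min and max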

2.5 Validity and reliability. I will attend to the trustworthiness of this quantitative stage of research by following procedures to increase validity and reliability. To start, I acknowledge that my survey distribution strategy is its own form of sampling bias, replete with external validity issues. Still, I will check the construct validity of these quantitative analyses by asking: “Does the measure behave in a statistical model in a way that would be expected, based on theory and prior research?” (Remler & Van Ryzin, 2011, p. 113). This reporting will be a priority in my discussion of the results of this study. I will also follow one of the validity procedures from the qualitative stage here as well: the audit trail conducted by an external reader. I will ask a colleague to review my R code and run the analysis script. This will ensure that I have documented my code sufficiently for someone else to understand it and that the script runs on a machine other than my own computer. Finally, I will check internal reliability by calculating Cronbach’s alpha for each scale included in the survey instrument. During the pilot study, I will check alphas and adjust the scales (by adding or removing items) as necessary. I will also check alphas after the final collection of surveys, noting any issues with internal reliability in my final report of the study.

7.3 Stage 3 Methods: Interpretation

Finally, after completion of the qualitative interview stage and the quantitative survey stage, I will conduct a final interpretation of my findings. As I stated earlier, I will not compare the qualitative findings to the quantitative findings because they are drawn from different samples. Instead, I will note which of the qualitative findings appear to generalize to the larger survey sample, as inferred from the quantitative stage.