Please post a scholarly reply of about 150 words to each post.
There are three different posts below. Read them and write a scholarly response of roughly 150 words to each.
It is NOT required that every question or point be analyzed; more focused discussions on one or more central points are encouraged.
Original questions and posts:
Post 1
Q1: Review and discuss how to develop interview questions.
When developing interview questions, the researcher must keep in mind the construct they are attempting to describe without introducing bias (McGregor, 2018). A best practice is to employ open-ended questions, what McGregor refers to as “an interview guide,” and meaningful prompts to keep the interview on track and address any lulls in the flow of conversation (p. 244). McGregor (2018) also recommends sending follow-up questions to participants to address any issues that arose “during the data collection process” (p. 244). The type of question also depends upon the format of the interview: informal conversational interviews require less structure and may evoke more spontaneous responses (McGregor, 2018).
Q2: How do interview questions differ from research questions?
Research questions focus on the research itself and set the stage for how researchers will collect data from participants (McGregor, 2018). Interview questions are those presented to participants in an attempt to answer a research question, describe a phenomenon, or explore a relationship. According to Creswell (2005), “Research questions in qualitative research help narrow the purpose of the study into specific questions” (p. 136).
Q3: Discuss validity and reliability in quantitative research.
Validity in research relates to how trustworthy the information is as presented by the researcher (McGregor, 2018). It is crucial to be critical of research in order to expose its flaws so that the forward progression of information can take place and future research can utilize the information with confidence. According to McGregor (2018), there are four main types of validity to take into consideration: internal validity, external validity, logical validity, and internal consistency. Each one carries a portion of the whole truthfulness of the study and should be considered by the researcher at every step in the process. Reliability relates to the ability of others to replicate the research given the same design and produce similar results (McGregor, 2018). The method must be reproducible and appropriate information must be provided so that others can successfully carry out the research (McGregor, 2018).
Q4: Describe how you will measure all of your variables in your proposal, including survey names and references, as well as how you know they are valid and reliable measures.
I plan to use the Teas (1981) survey instrument, which will be based on a modified Sims et al. (1976) scale, to identify phenomena within the domestic palliative care industry. The survey instrument has been validated in previous studies, and the adaptation will not alter the properties of the instrument, only tailor it to a specific population. This will enhance validity and allow others to reproduce the study, promoting reliability.
References
McGregor, S. (2018). Understanding and evaluating research: A critical guide. Sage Publications.
Pyrczak, F., & Tcherni-Buzzeo, M. (2019). Evaluating research in academic journals: A practical guide to realistic evaluation. Routledge.
Sims, H. P., Jr., Szilagyi, A. D., & McKemey, D. R. (1976). Antecedents of work-related expectancies. Academy of Management Journal, 19(4), 547–559.
Teas, R. K. (1981). An empirical test of models of salespersons’ job expectancy and instrumentality perceptions. Journal of Marketing Research, 18(2), 209-226.
Post 2
Q1: Researchers use interview questions to determine what they would like to answer in their research. Interview questions are generally open-ended and use active verbs to define the direction they would like the research to take. There are two types of questions: central questions and subquestions. The central question is the overarching question the researcher wishes to explore in the study. Subquestions narrow the focus of the central question, asking specific questions about what the researcher seeks to learn from the study.
Q2: Research questions are what the researcher is trying to answer through the research. Interview questions are very similar to research questions, but they are used more in qualitative studies, such as focus groups. The questions can be open-ended and are answered by the group participating in the study.
Q3: Heale and Twycross (2015) provided a scenario in their article that helped me understand the difference between validity and reliability. In this scenario, an alarm clock is set for 6:30 am every day, but the alarm goes off at 7:00 am each day. The alarm is considered reliable because it produces the same result every day, sounding at 7:00 am; the outcome is consistent from one day to the next. Reliability and consistency work together: if the data are consistent, they are reliable. Validity is how accurate the study is. The alarm clock is not valid because it goes off 30 minutes later than the time it was set for.
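A brief numerical sketch of the alarm clock analogy, using entirely hypothetical values, may help separate the two ideas: the readings are perfectly consistent (reliable) yet systematically off target (not valid).

```python
# Hedged illustration of the alarm clock analogy with hypothetical numbers:
# a measure can be perfectly consistent (reliable) yet systematically
# inaccurate (not valid).
import statistics

target = 6.5                               # alarm set for 6:30 am (in hours)
observed = [7.0, 7.0, 7.0, 7.0, 7.0]       # it actually sounds at 7:00 am daily

spread = statistics.pstdev(observed)       # zero spread -> consistent, so reliable
bias = statistics.mean(observed) - target  # systematic error -> validity problem

print(f"Spread across days: {spread:.2f} hours (consistent, so reliable)")
print(f"Systematic bias: {bias:.2f} hours late (inaccurate, so not valid)")
```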
Q4: I am still conducting additional research. I am looking at studies through Google Scholar where the data have been validated, as well as looking for quantitative data with a reference to Cronbach's alpha. Over the next few days, I am going back through my research to tighten up my quantitative proposal and to make sure my sources line up with the research I have done.
References
Pyrczak, F., & Tcherni-Buzzeo, M. (2019). Evaluating research in academic journals: A practical guide to realistic evaluation. Routledge.
Creswell, J. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (pp. 136–141). Prentice Hall.
Heale, R., & Twycross, A. (2015). Validity and reliability in quantitative studies. Evidence-Based Nursing, 18(3). http://dx.doi.org/10.1136/eb-2015-102129
Post 3
Q1: Review and discuss how to develop interview questions.
Interview questions should not lead interviewees into the perspective or bias of the author; they should be open-ended and avoid questions that begin with “why” (Pyrczak & Tcherni-Buzzeo, 2019; Creswell, 2005). There should be some background and context provided with the qualitative questions to help the interviewee grasp the criticality of the research endeavor (Pyrczak & Tcherni-Buzzeo, 2019). A deductive approach can work for developing interview questions if there is “sufficient conceptual treatment of the phenomenon,” meaning that the construct has straightforward content for generating items and examples from scholarly references (Diehl et al., 2020, p. 7).
Q2: How do interview questions differ from research questions?
While interview questions are open-ended to give individuals true, unbridled flexibility and creativity in sharing unique perspectives, qualitative research questions are more rigorous and address the overarching problem or aim of the study. Research questions can be grouped into two classifications, “the central question and the subquestions” (Creswell, 2005, p. 1). Interview questions are the direct questions posed to participants in the survey and do not necessarily provide an all-encompassing view of the ultimate answer sought in the investigation.
Q3: Discuss validity and reliability in quantitative research.
It is still considered best practice to incorporate “multiple measures” to help offset the errors that exist in any single method (Pyrczak & Tcherni-Buzzeo, 2019, p. 90). References may also be required to give readers additional input and data about the research goal. Due to the sensitivity of some topics, such as “sexual orientation and income,” self-reported data could be skewed by participants' preconceived fears of retaliation or rebuttal from others, whether that inhibitor exists only in conscious thought or in material reality (Pyrczak & Tcherni-Buzzeo, 2019, p. 93). One way to address this threat to validity and reliability in surveying is to allow participants to remain anonymous or to indicate that the results are confidential (Pyrczak & Tcherni-Buzzeo, 2019, p. 93). Observation techniques can also impact quantitative research outcomes, so it is worth carefully considering the dynamics and culture of the environment (Pyrczak & Tcherni-Buzzeo, 2019).
Another consideration in quantitative research design is whether the design incorporates any level of experimentation (McGregor, 2018). For the non-experimental design that we will conduct in our dissertation program, these efforts will be based upon criteria such as variables, control groups, and relationship comparisons (McGregor, 2018).
Q4: Describe how you will measure all of your variables in your proposal, including survey names and references, as well as how you know they are valid and reliable measures.
When measuring all of the outcomes in my proposal, one key question will be whether there are any areas where I may have introduced subjectivity into my assessment. If there is concern that subjectivity affects a valid and reliable evaluation, it may behoove me to have several observers establish a “rate of agreement” so that the best evaluation is made despite conscious or unconscious bias (Pyrczak & Tcherni-Buzzeo, 2019, p. 95). It is also essential, with all variables, to consider whether the connection between the measures is “internally consistent”; Cronbach's alpha is a statistic I will use to determine whether inconsistencies are present in the questioning (Pyrczak & Tcherni-Buzzeo, 2019, p. 96). My surveys can incorporate various naming and referencing scales to help identify the proper name and connected reference and to determine whether the data type is nominal, ordinal, interval, or ratio-based (McGregor, 2018). Nine types of statistical variables could be evaluated on these scales (dependent and independent, criterion and predictor, constraints, controls, moderator, intervening, extraneous and confounding, and spurious) (McGregor, 2018).
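For reference, Cronbach's alpha for a set of survey items is commonly computed from the item variances and the variance of the total score; the formula below is a standard textbook presentation of the statistic, offered as orientation rather than as a prescription for any particular instrument.

```latex
\[
\alpha \;=\; \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{i}}{\sigma^{2}_{\text{total}}}\right)
\]
```

Here k is the number of items, σ²ᵢ the variance of item i, and σ²_total the variance of participants' total scores. Values closer to 1 indicate stronger internal consistency; roughly .70 is often treated as a common, though context-dependent, benchmark.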
Another means to determine validity and reliability in my proposal is to conduct a temporal stability test, in which the same test is given at two different points in time (Pyrczak & Tcherni-Buzzeo, 2019, p. 95). This is similar to Dr. Lilleboe's pilot test, in which she first tested 10% of her target sample to be certain that her actual results would demonstrate temporal stability (Lilleboe, 2021). “Evidence of Empirical Validity” is another check to be certain that, whatever system of measurement the study uses, the units of detection make logical sense so that the scale will be interpretable by readers and other researchers (Pyrczak & Tcherni-Buzzeo, 2019, p. 99). Lastly, it behooves me to evaluate my data and discuss the gaps and limitations so that other researchers can consider analyzing my work for intellectual development, encouraging more avenues for diverse perspectives (Diehl et al., 2020).
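As an illustrative sketch only, temporal stability is often summarized with a correlation between two administrations of the same instrument given to the same participants; the scores below are hypothetical and are not drawn from any study cited here.

```python
# Hedged illustration: test-retest (temporal stability) reliability as a
# Pearson correlation between two administrations of the same instrument.
# The scores are hypothetical pilot values, not data from any cited study.
from scipy.stats import pearsonr

time1 = [32, 28, 41, 36, 25, 39, 30, 34, 27, 38]  # total scores at Time 1
time2 = [33, 27, 40, 35, 26, 41, 31, 33, 29, 37]  # same respondents at Time 2

r, p_value = pearsonr(time1, time2)
print(f"Test-retest correlation: r = {r:.2f} (p = {p_value:.3f})")
# A strong positive correlation is typically read as evidence that the
# measure is temporally stable.
```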
References
Creswell, J. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (pp. 136–141). Prentice Hall.
Diehl, A. B., Stephenson, A. L., Dzubinski, L. M., & Wang, D. C. (2020). Measuring the invisible: Development and multi-industry validation of the gender bias scale for women leaders. Human Resource Development Quarterly, 31(3), 249–280. https://doi.org/10.1002/hrdq.21389
Lilleboe, A. (2021). Dissertation proposal defense. Retrieved from https://blackboard.indianatech.edu/webapps/blackboard/content/listContent.jsp?course_id=_414801_1&content_id=_2702583_1&mode=reset
McGregor, S. (2018). Understanding and evaluating research: A critical guide. Sage Publications.
Pyrczak, F., & Tcherni-Buzzeo, M. (2019). Evaluating research in academic journals: A practical guide to realistic evaluation. Routledge.