
Discerning Myths from Methods: University Faculty’s Understanding of Learning Science and Metacognition on Pedagogy

Abstract

This study assessed the pedagogical knowledge and metacognitive awareness of pedagogy of faculty (N = 107) at a large state university in the United States. The purpose was to ascertain whether faculty could distinguish effective learning practices from ineffective ones, as determined by empirical research in learning science. Faculty responded to items regarding the efficacy of effective practices and others shown by research to be neuromyths or misconceptions. Faculty across all colleges correctly identified most of the effective practices but also endorsed myths/misconceptions, ultimately showing limited pedagogical knowledge. Tenured faculty showed stronger pedagogical knowledge than newer faculty. Faculty were also assessed on their confidence in their knowledge of pedagogical practices. Respondents demonstrated poor metacognitive awareness, as there was no relationship between confidence in pedagogical knowledge and actual pedagogical knowledge. Surprisingly, education faculty scored no better in pedagogical knowledge than faculty of any other college and also showed low metacognitive awareness. Results indicate that universities preparing doctoral students for faculty positions should ensure candidates are exposed to accurate information regarding learning science. The implications for colleges of education are more dire in that they may be failing to prepare candidates in the most essential aspects of the field.

Keywords: pedagogy, learning science, cognitive myths, metacognition, faculty, teaching methods

JEL Classification: I20, I21, I23

1. Introduction

Teaching, whether at the university level or the K-12 level, is generally considered to comprise two broad fields of knowledge: content knowledge and pedagogical knowledge. There is little doubt as to the value of content knowledge, because instructors would be unable to guide student learning on a subject they had little understanding of themselves. In this scenario the instructor would not be able to recognise or facilitate learning of accurate information, nor would the instructor be able to address misconceptions. Furthermore, without sufficient content knowledge the instructor cannot effectively scaffold student learning (Kirschner & Hendrick, 2020). In all fields, at the foundational level, content knowledge provides the structural groundwork for teaching and learning.

The second of the two broad fields of knowledge essential to teaching, pedagogical knowledge, focuses on how humans learn and the processes that may best facilitate that learning. Pedagogical knowledge may entail sociocultural aspects of learning, such as an understanding of how the learning environment may impact the ways students respond to instruction or how language and culture influence a child's construction of their own knowledge. Pedagogical knowledge also draws upon cognitive science, or more specifically, what has become known as the learning sciences. This would include knowledge about, for example, how memory processes unfold or how cognitive load may impact a student's ability to learn unfamiliar information (Sweller, 1988).
While there is no debate over the importance of content knowledge in teaching and learning, there is some question as to whether sufficient attention is devoted to the psychological science on learning that underpins pedagogical knowledge (Cuevas, 2019; Willingham, 2019). Certainly, the curricula for those preparing to be university instructors in any field (e.g., physics, history) focus predominantly on content knowledge, as PhD programmes are expected to produce professors with the highest level of content knowledge possible. In contrast, programmes that prepare K-12 teachers attempt a greater balance, with more attention paid to how to teach material given the developmental level and background of the learner. However, research over previous decades has shown that many educators maintain beliefs that can be classified as cognitive myths (Kirschner & van Merriënboer, 2013; Rogers & Cheung, 2020; Rousseau, 2021). One large study (N = 3877) found that neuromyths, or misconceptions about education and learning, were prevalent among educators, though the effect was moderated by higher levels of education and exposure to peer-reviewed science (Macdonald et al., 2017).

2. Literature Review

A recent systematic review concluded that nearly 90% of educators maintained a belief in neuromyths about learning styles (Newton & Salvi, 2020), and other research has found that neuromyths in general may be particularly resistant to revision among those in teacher education programmes (Rogers & Cheung, 2020). Research also suggests the belief in neuromyths among educators to be so prevalent that interventions are now being proposed to correct those misconceptions (Rousseau, 2021). If misconceptions about how humans learn are as widespread as research seems to indicate, then the issue would be twofold: instructors would likely be incorporating ineffective methods into their lessons due to the mistaken assumption that such methods will benefit student learning, while also bypassing strategies shown to be effective. Thus, in practice, ineffective methods would displace the instructional strategies known to enhance learning (Rohrer & Pashler, 2010).

Therefore, we reasoned that an understanding of pedagogy based on learning science would be reflected by two characteristics: 1) whether faculty were familiar with well-established learning concepts, and 2) whether they held misconceptions regarding how students learn. In this study we documented the level of pedagogical knowledge of faculty at a large state university in the United States with a teaching emphasis by assessing their understanding of basic learning concepts and misconceptions about learning. A second, yet essential, component of this research was to measure faculty's level of metacognitive awareness of their pedagogical knowledge, as an understanding of their own knowledge of how learning occurs is likely to influence their willingness to seek out more effective practices (Pennycook et al., 2017).

Learning Concepts

We identified ten learning concepts, five of which were well established by research as beneficial to learning and five of which research has debunked and would be classified as myths or misconceptions. In considering the different learning concepts, we selected concepts for which there would be little debate among learning scientists regarding their efficacy.
Many options were not included because the extent of their effectiveness may depend on nuances regarding their delivery, and as such they may be effective in some circumstances but not in others. For instance, the use of manipulatives shows much promise for students in certain content areas and of certain ages but is not necessarily universally beneficial in all circumstances (Willingham, 2017). The ten we chose were applicable to all content areas and age groups. The following sections constitute succinct summaries of what we deem to be the current research consensus on each of the learning concepts included in the study. More in-depth explanations of each concept can be found in Cuevas et al. (2023).

Myths and Misconceptions

Multitasking

Multitasking is defined as the ability to manage and actively participate in multiple actions simultaneously (Lee & Taatgen, 2002; Wood & Zivcakova, 2015), and a critical analysis of multitasking relies on a framework for working memory and cognitive load (Sweller, 2011). Working memory is an individual's capacity to store and manipulate information over a short period of time (Just & Carpenter, 1992) and has been described as a "flexible mental workspace" (Alloway et al., 2006, p. 1698); cognitive load refers to the processing demand that a task imposes on working memory's limited capacity (Sweller, 2011). As learners manipulate and process information within working memory, cognitive load increases. Generally, if the learning environment has too many unnecessary components, this creates extraneous cognitive load, negatively affecting the capacity of a student to learn. Mercimek et al.'s (2020) findings showed that multitasking actually impedes students' learning. Other research indicated that engaging in multitasking while completing academic tasks had a negative impact on college GPA (Junco & Cotten, 2011), exam grades (Sana et al., 2012), and class grades (Demirbilek & Talan, 2017; Zhang, 2015), suggesting an impairment in cognitive processing. Wang and Tchernev (2012) also note that multitasking results in diminished cognitive performance as memory and cognitive load are taxed by competing stimuli, negatively affecting outcomes; therefore, encouraging multitasking is likely to be detrimental to students.

Learning Styles

The notion that individuals have learning styles and that tailoring instruction to these modalities can enhance students' learning is among the most persistent cognitive myths (Kirschner & van Merriënboer, 2013; Riener & Willingham, 2010; Torrijos-Muelas et al., 2021), one that can have detrimental effects on students (Scott, 2010). Decades after learning styles-based instruction found wide use in educational settings across a wide spectrum of grade levels and in many countries, exhaustive reviews suggest that nearly all available research evidence indicates that learning styles do not exist and that adapting instruction to them has no educational benefits (Coffield et al., 2004; Cuevas, 2015, 2016a; Pashler et al., 2009; Rohrer & Pashler, 2012). Well-designed experiments testing the hypothesis have continually debunked the premise, concluding that teaching to learning styles does not enhance learning (Cuevas & Dawson, 2018; Rogowsky et al., 2015, 2020).
The persistent myth of learning styles has been identified as a substantial problem in education that impacts teacher training and the quality of instruction across K-12 and college classrooms (Cuevas, 2017), with some researchers recently exploring ways to dispel such detrimental neuromyths (Nancekivell et al., 2021; Rousseau, 2021).

Digital Natives

Recent generations (e.g., "Millennials" and "Gen Z") have been characterised as being inundated with technology since birth, and as a result, Prensky (2001) suggested that these learners were digital natives who may learn best when teachers integrate technology into instruction. However, researchers have concluded that these claims are not grounded in empirical evidence or informed by sound theoretical perspectives (Bennett et al., 2008; Jones et al., 2010). Margaryan et al. (2011) similarly found no evidence that this generation of learners had differing learning needs. Kirschner and De Bruyckere (2017) contend that the concept of digital natives is a myth, as these learners often do not utilise technological tools effectively and the use of technology can actually adversely affect knowledge and skill acquisition. Wang et al. (2014) concluded that while students may know how to use technology fluidly within the context of entertainment or social media, they still need guidance from teachers to use technology to support learning. Both Smith et al. (2012) and Hite et al. (2017) concluded that modern students must be taught to use technology for learning purposes just as previous generations were, and that its use does not come naturally in learning environments. Prensky (2012) later revised the concept of digital natives, acknowledging that the premise lacks empirical support. Ultimately, education research suggests that the concept of students being digital natives lacks merit.

Pure Discovery Learning

In their purest form, discovery learning and other similar methods of instruction, such as inquiry learning and project-based instruction, are designed to maximise student involvement with minimal guidance from the instructor (Clark et al., 2012; Mayer, 2004). Despite the popularity of such approaches, decades of empirical research have shown that minimally guided instruction is not effective at enhancing student performance (Mayer, 2004) and is not consistent with the cognitive science on human learning (Sweller et al., 1998). Learning is defined by change occurring in long-term memory (Kirschner & Hendrick, 2020), which could encompass higher order processes and updates to episodic, semantic, and procedural memory (Cuevas, 2016b). But because students are tasked with navigating unfamiliar territory on their own during discovery-type learning, such minimally guided instruction places a heavy burden on working memory (Kirschner et al., 2006), leaving fewer cognitive resources available to contribute to encoding new information into long-term memory. Cognitive Load Theory suggests that effective instructional methods should decrease cognitive load and that approaches that instead tax working memory and increase cognitive load, as unguided methods do, result in less learning (Kirschner & Hendrick, 2020; Sweller, 2016).

Extrinsic Motivation

According to Deci et al. (1991), motivation within an education context entails student interest, capacities, and a sense of valuing learning and education. Motivational tendencies can be described as either intrinsic or extrinsic.
Educational motivation can manifest in students completing tasks because of curiosity, awards, interest in a topic, approval from a parent or teacher, enjoyment in learning a new skill, or receiving a good grade (Ryan & Deci, 2000). Educators must attempt to foster students' motivation, which may lead them to extrinsic strategies, such as rewards, to promote learning (Ryan & Deci, 2009). However, the use of extrinsic motivators can negatively affect students' motivation if learning is contingent on rewards (Deci et al., 2001; National Research Council, 2018). Instead, teachers should focus on fostering intrinsic motivation by helping students develop academic goals and monitor learning progress while encouraging autonomy and choice (National Research Council, 2018). Gill et al. (2021) found a positive relationship between intrinsic motivational factors and the development of goals and well-being, suggesting that educators should focus on intrinsic motivation as a basis for learning. Vasconcellos et al. (2020) concluded that external motivators were negatively associated with adaptive outcomes and positively associated with maladaptive ones. Decades of research on extrinsic rewards suggest that they do not support learning or healthy motivational behaviours long term, and thus intrinsic motivational factors should supplant them to promote learning.

Established Learning Principles

Retrieval Practice and the Testing Effect

Research has clearly demonstrated that having students engage in retrieval practice, in which they are tasked with attempting to retrieve learned information from memory, improves long-term retention (Roediger & Butler, 2011; Roediger & Karpicke, 2006; Rowland, 2014). Meta-analyses indicate that the use of retrieval practice is more effective than re-studying for both simple and complex information (Karpicke & Grimaldi, 2012; McDaniel et al., 2011). Retrieval practice often takes the form of practice tests and quizzing, and even a single retrieval session is sufficient to stimulate stronger retention of information than not engaging in testing at all. More than a century of empirical research on what is known as the testing effect has consistently indicated that the use of practice tests, either as a classroom activity or a form of studying, promotes increased learning and retention compared to more commonly used study strategies (Roediger & Karpicke, 2006). Meta-analyses have found that retrieval practice tends to produce the strongest effects in mathematics but that it impacts learning across all content areas, and its positive effects are intensified when students are provided with feedback in response to the practice tests (Bangert-Drowns et al., 1991). Practice testing also produces substantial benefits in enhancing students' free recall and long-term retention while reducing forgetting (Roediger & Karpicke, 2006).

Dual Coding

Decades of research have firmly established that pairing images with verbal or textual information assists learning and retention of information. This process is explained by Dual Coding Theory, a concept Paivio pioneered in 1969 and expanded upon in 1986. The theory asserts that humans have two separate cognitive systems for processing information, one verbal and one visual, and that when the two are combined there is an additive effect that allows for greater retention than would be possible with just one of the systems being incorporated (Clark & Paivio, 1991; Cuevas, 2016a; Kirschner & Hendrick, 2020).
The two systems are interconnected but functionally independent, an important feature because if the two systems did not function independently, cognitive overload would result, as it often does as a consequence of excessive stimuli. Instead, cognitive load remains low when images are used to supplement linguistic information, and memory is enhanced due to there being two storage systems. Indeed, Cognitive Load Theory, later developed by Sweller (Kirschner & Hendrick, 2020; Sweller, 1988), relies heavily on Dual Coding Theory. Neurological research has provided evidence for the processes that allow images to enhance memory (Di Virgilio & Clarke, 1997; Fiebach & Friederici, 2003; Welcome et al., 2011). Additionally, a great deal of experimental research from the field of cognitive psychology has documented the benefits of dual coding on human learning and its potential for use in educational settings (Cuevas & Dawson, 2018; Hodes, 1998; Sadoski et al., 1995; Sharps & Price, 1992; Wooten & Cuevas, 2024).

Summarisation

The effectiveness of summarisation is linked to metacognition and self-regulation, two essential components of learning (Day, 1986; Leopold & Leutner, 2012). Kintsch and van Dijk (1978) and Brown and Day (1983) proposed important characteristics for summarising or condensing information: specifically, deleting irrelevant information, identifying pertinent information or specific supporting ideas, writing cohesive statements or topic sentences for each idea, and reconstructing ideas. These features constitute a frame for students to translate learned information into their own words (Westby et al., 2010). They are related to evidence-based strategies such as K-W-L charts (Carr & Ogle, 1987), drawing concepts (Leopold & Leutner, 2012), concept maps (Chang et al., 2002), and the use of headings to structure comprehension (Lorch & Pugzles Lorch, 1996). Summarisation strategies have been shown to be particularly helpful to students of varying age groups and ability levels in building comprehension (Hagaman et al., 2016; Shelton et al., 2020; Solis et al., 2011; Stevens et al., 2019), a skill that is vital for student success (Bogaerds-Hazenberg et al., 2020; Perin et al., 2017). Ultimately, decades of research indicate that summarisation assists students in processing information into a condensed structure and is an effective strategy for supporting reading comprehension and developing content knowledge.

Direct Instruction

Teacher-centred strategies such as direct instruction have tended to lose support in pedagogical circles as student-centred forms such as discovery learning have become more popular (Clark et al., 2012). Yet direct instruction has long been established as among the most effective forms of instruction (Kirschner & Hendrick, 2020). The method comprises well-supported strategies such as activating background knowledge and fostering introductory focus to start lessons, modelling skills and processes for students, providing well-structured explanations that chunk information into manageable portions, and guiding independent practice after students have had sufficient support and have become familiar with the concepts. Each of these is an aspect of successful scaffolding, and each is strongly supported by research in cognitive science and educational psychology (Rosenshine, 2012).
The evidence for the effectiveness of the different features comprising direct instruction is so thorough that the American Psychological Association's top 20 principles of teaching and learning (2015) include sections dedicated to specific components of direct instruction. Additionally, two comprehensive meta-analyses captured the extent of research support for the method. One found consistently significant and positive effects on student learning across 400 studies over 50 years covering all subject areas (Stockard et al., 2018). Another compared the effects of student-centred approaches and direct instruction based on studies across a ten-year period and concluded that the positive effects of direct instruction employing full teacher guidance were far greater than those of student-driven approaches (Furtak et al., 2012).

Spacing

Spacing, or distributed learning, occurs when an instructor or student intentionally inserts time intervals between learning sessions based on the same content. Introducing time intervals between study sessions results in stronger retention than massing practice and limits forgetting (Cepeda et al., 2009; Latimier et al., 2021). Research has shown distributed learning to be effective across many different domains, populations, age groups, and developmental levels, in each case resulting in substantial improvements in long-term retention (Carpenter et al., 2012; Larsen, 2018; Seabrook et al., 2005). Kirschner and Hendrick (2020) argue that distributed practice is among the most well-established procedures for enhancing learning. In one large, well-designed study, Cepeda et al. (2008) concluded that optimal retention occurred when learners studied information on multiple occasions with gaps between study periods and tests administered at different time intervals. Dunlosky et al. (2013) noted that while spacing is not consistently or intentionally practised in formal learning environments, the practice has high utility value due to its consistent benefits to students across such a wide range of variables. Rohrer and Pashler (2010) contend that while there is extensive research evidence supporting the effectiveness of spacing as an instructional strategy, relatively little attention is devoted to its use in practice.

Current Study

The principal purpose of this study was to assess the pedagogical knowledge of faculty at a large state university in the U.S. with a teaching emphasis, specifically their knowledge of practices for or against which there is abundant research evidence. Faculty at research universities primarily focus on research output, whereas faculty at teaching universities devote the majority of their time and effort to delivering instruction. Thus, it seemed logical to assess faculty's knowledge of teaching at a university where teaching, and therefore pedagogy, is the greater emphasis. Additionally, because the field of education is predominantly concerned with pedagogy, education professors would ostensibly be expected to show the strongest understanding of these concepts, though faculty from all departments within the university were assessed.

A secondary purpose of the study was to gauge professors' metacognitive awareness by ascertaining their confidence levels in their pedagogical knowledge and whether their self-assessments aligned with their actual level of knowledge.
The implication is that if faculty showed high confidence but low knowledge, and therefore low levels of metacognition, they would be unaware of their misconceptions. As a result, such faculty would be unlikely to seek out professional development opportunities or investigate approaches that may ultimately improve their instruction. If, on the other hand, faculty showed stronger metacognition, with low confidence and also low knowledge, they would likely be more willing to engage with sources to improve the delivery of their content because they would be aware of their limitations in that regard. Finally, if they showed strong metacognition with high confidence and high levels of knowledge, this would be the ideal scenario and should result in favourable learning outcomes, provided the faculty also had sufficient understanding of content and sociocultural awareness.

The study was guided by the following research questions:

1. Do faculty members show a strong understanding of well-established concepts regarding learning, as established by cognitive science?
2. Which learning concepts do faculty show the most misunderstanding of (i.e., which myths do they tend to endorse, and which established concepts do they tend to reject)?
3. Are there differences in the level of knowledge of learning practices between faculty members from different disciplines? For instance, do faculty from education score significantly higher than faculty in other areas? Are there faculty from certain content areas who show a propensity for believing in myths or for not being aware of established learning principles?
4. Are there differences in the level of knowledge of learning practices between faculty members according to rank, i.e., university experience?
5. Do faculty show metacognitive awareness of their level of knowledge of teaching and learning practices?

3. Methodology

Contextual Factors and Participants

Data were collected from faculty at a midsized public university comprised of five campuses in the southeastern United States, with a total enrolment of approximately 18,000 students and 980 faculty at the time of data collection. The student-to-faculty ratio is 18:1, and 74% of faculty are full-time, indicating a low proportion of adjunct faculty. According to the Carnegie Classification system, the university is classified under "Master's Colleges & Universities: Larger Programs". The vast majority of the students enrolled and the degrees the university confers are at the undergraduate level, but the institution also offers master's and doctoral degrees.

The institution contains a traditional composition of colleges for a state university: Arts and Letters, Business, Education, Health Sciences and Professions, Science and Mathematics, and University College (interdisciplinary studies). The participants consisted of full-time faculty (N = 107) from each college at the university. The breakdown of responses by college was approximately proportional to the relative size of each college (n = 37, n = 7, n = 14, n = 6, n = 31, n = 4, respectively, with eight declining to identify their college). Respondents were evenly distributed according to rank: Professors (n = 26), Associate Professors (n = 29), Assistant Professors (n = 28), and Lecturers/Instructors (n = 22), with 79% being tenured or tenure track and two declining to specify.
Of all faculty responding to the broader survey (N = 186), 54.3% identified as women, 39.8% identified as men, 2.2% identified as non-binary or non-conforming, and 3.7% chose not to answer. Data on age were not available.

Design

The study used a non-experimental, cross-sectional design that primarily relied on group comparison for the essential analyses. Data were analysed quantitatively using descriptive statistics, analysis of variance, and Pearson correlation. The study did not include an intervention, and data were collected at a single time point. Though data were collected via a survey instrument, an objective assessment of knowledge was used as the primary measure instead of the dispositional constructs that would more commonly be the focus of a survey.

Instrument

The Diverse Learning Environments (DLE) survey was distributed electronically by the Higher Education Research Institute (HERI) to students and faculty across all five campuses. The DLE is administered by the UCLA HERI to colleges and universities across the United States (HERI, 2015) and was designed to assess stakeholders' perceptions of constructs such as institutional climate, satisfaction with the academic and work environment, institutional diversity, individual social identity, intergroup relations, and discrimination. A more detailed description of the DLE and related variables can be found in Dawson and Cuevas (2019). For this study we analysed the variables related to the faculty portion of the survey.

The faculty section of the DLE used for this study was comprised of 95 items, including questions about demographic variables, rank and tenure status, course information, involvement in activities such as research and scholarship, instructional strategies, technology, DEI, COVID-19, satisfaction with the workplace environment, and salary. The majority of the items utilised a Likert scale, with some Yes/No response items and accompanying follow-up questions. The final 35 items were "local optional questions" added to the survey by the institution beyond the original DLE items in order to address specific questions of interest. The items used for this study were included in the local optional questions and were grouped into two constructs: pedagogical knowledge items and confidence in pedagogical knowledge. The design, selection, and scoring of the items are discussed below.

Pedagogical Knowledge Items

To assess pedagogical knowledge, a 10-item scale was created. The scale consisted of five common myths and misconceptions about learning and five learning strategies that have been well established by research. The five myths entailed the following concepts: learning styles, discovery learning, the efficacy of fostering extrinsic motivation, the efficacy of multitasking, and the existence of digital natives. The five well-established learning concepts consisted of the following: dual coding, summarisation, practice testing, direct instruction, and spacing. The rationale was that pedagogical knowledge could be assessed through two general propositions: how many misconceptions an educator held about learning and how many effective approaches they were aware of.
The items were limited by two main factors: 1) due to the length of the overall survey, we were only able to insert a small number of additional items because of concerns over time constraints and respondents' likelihood of completing a lengthy questionnaire, and 2) there needed to be clear and well-established research evidence for or against each item, with each concept as close to "settled" science as possible. Concerning this second factor, there are still many learning concepts under scrutiny, or that may show efficacy in some circumstances but not others, and such concepts could not be considered.

To ensure that the format of the items was consistent with the rest of the survey items on the DLE, they were presented on a 4-point Likert scale, from "Strongly Disagree" to "Strongly Agree". However, the responses were scored dichotomously as either correct or incorrect. For the effective learning approaches, respondents who agreed that they were effective by answering either "agree" or "strongly agree" were credited with answering the item correctly; those who answered "disagree" or "strongly disagree" were credited with an incorrect answer. Conversely, for the myths and misconceptions, agreeing or strongly agreeing that they were effective was treated as an incorrect answer, whereas disagreeing or strongly disagreeing was scored as correct. The scale can be found in Appendix A.
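To make the scoring scheme concrete, the sketch below expresses it in code. This is our illustration rather than part of the survey pipeline; the item names and the 1-4 response coding ("Strongly Disagree" to "Strongly Agree") are assumptions for demonstration purposes.

```python
# Illustrative sketch of the dichotomous scoring described above (not the
# authors' code). Responses are assumed coded 1-4, from "Strongly Disagree"
# (1) to "Strongly Agree" (4).

MYTH_ITEMS = {"learning_styles", "discovery_learning", "extrinsic_motivation",
              "multitasking", "digital_natives"}
EFFECTIVE_ITEMS = {"dual_coding", "summarisation", "retrieval_practice",
                   "direct_instruction", "spacing"}

def score_item(item: str, response: int) -> int:
    """Return 1 for a correct answer and 0 for an incorrect one."""
    agrees = response >= 3  # "Agree" or "Strongly Agree"
    if item in EFFECTIVE_ITEMS:
        return int(agrees)      # endorsing an effective practice is correct
    if item in MYTH_ITEMS:
        return int(not agrees)  # rejecting a myth/misconception is correct
    raise ValueError(f"Unknown item: {item}")

def pedagogical_knowledge_score(responses: dict) -> int:
    """Total correct answers across the ten items (possible range 0-10)."""
    return sum(score_item(item, resp) for item, resp in responses.items())
```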
Confidence in Pedagogical Knowledge Scale

In psychological and educational research, metacognition or metacognitive awareness has traditionally been measured by gauging one's confidence in one's knowledge, skills, or expertise on a topic via a survey and then assessing the respondent through an objective test to ascertain whether the two are positively correlated (Anderson & Thiede, 2008; Dunlosky et al., 2005; Thiede et al., 2003). If an individual's self-assessment positively correlates with their actual knowledge level, this indicates strong metacognition. For instance, if a person rated their own knowledge as high and scored high on the objective assessment, they would have shown good metacognitive awareness. Likewise, if they rated their own knowledge as low and scored low on the assessment, this would also show good metacognitive awareness, because the individual would have provided an accurate self-assessment and would be aware that they were not knowledgeable about the subject.

In contrast, an individual would show poor metacognition if there was a negative correlation or no correlation between their self-assessment and the objective assessment. It could be that the person assessed themselves as having low levels of knowledge but actually scored high on the assessment, which may be the case if the individual suffered from anxiety or was simply unsure of their own ability; in this example the individual underestimated their knowledge or ability. But the more common form of a lack of metacognitive awareness occurs when the individual overrates their own knowledge on the self-assessment but scores poorly on the objective assessment. In essence, they would have believed themselves to be highly knowledgeable while in reality their knowledge level was low, failing to recognise their own lack of knowledge due to having a limited understanding of the field. This is a very common research finding known as the Dunning-Kruger effect (Kruger & Dunning, 1999; Pennycook et al., 2017), wherein a person overestimates their own competence and is unaware that they lack sufficient knowledge of a subject. When individuals contend that they have high levels of knowledge that they actually do not, it is known as overclaiming (Atir et al., 2015; Pennycook & Rand, 2019).

To assess metacognitive awareness regarding faculty's knowledge of pedagogical practices and human learning, we developed a five-item confidence scale. These items asked about familiarity with research on learning and the influence this has on their instruction, confidence in their knowledge of best practices, confidence in their use of best practices, and their familiarity with pedagogical practices compared to their peers. These items were presented on a 5-point Likert scale. A composite score was derived for each faculty member that represented a self-assessment of their familiarity and knowledge of best practices in regard to student learning. This score was then used to conduct a Pearson correlation between confidence in pedagogical knowledge and actual pedagogical knowledge, as determined by scores on the pedagogical knowledge items, in order to gauge faculty's metacognitive awareness of learning concepts. When tested for reliability, the Cronbach's alpha coefficient for this scale was .79. The scale can be found in Appendix B.
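For readers interested in the reliability estimate, the following is a minimal sketch of how Cronbach's alpha for a scale like this can be computed, assuming item responses are stored in a pandas DataFrame with one column per item; it also includes the item-removal check described later in the Limitations section. It is an illustration, not the authors' analysis code.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame with one column per scale item."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
    """Alpha recomputed with each item dropped in turn."""
    return pd.Series({col: cronbach_alpha(items.drop(columns=col))
                      for col in items.columns})
```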
4. Results

Descriptive Statistics for Pedagogical Knowledge by College and Concept

To address the first research question, pertaining to faculty members' overall understanding of the 10 pedagogical concepts, a mean score was tabulated for all respondents (N = 107). Individual scores could range from 0, if the respondent answered all questions incorrectly, to 10, if all questions were answered correctly. A correct answer represented agreement that well-established, research-based pedagogical concepts were effective or disagreement that the myths or misconceptions were effective learning practices. Values were reverse-scored for negatively coded items. The mean score across all faculty on the pedagogical knowledge items was slightly above the midpoint of the scale (M = 6.11). This result does not indicate strong pedagogical knowledge and reveals that faculty answered fewer than 65% of the items correctly on average. Table 1 presents the descriptive statistics for the pedagogical knowledge scores organised by the participants' affiliated college.

College              N     Mean   Std. Dev   Std. Error
Arts and Letters     37    6.38   1.77       0.29
Business             7     6.00   1.00       0.38
Education            14    5.93   2.06       0.55
Health Sciences      6     6.33   0.82       0.33
Sciences & Math      31    6.00   1.73       0.31
University College   4     6.00   1.83       0.91
Unidentified         8     5.63   1.30       1.30
Total                107   6.11   1.67       0.16

Range of possible scores: 0-10
Table 1. Pedagogical Knowledge Scores by College

The second research question addressed respondents' knowledge of effective learning practices in terms of which learning concepts they demonstrated the most and least understanding of. Across faculty from all colleges, the myth or misconception items that instructors scored most poorly on were learning styles (33% correct) and discovery learning (33% correct), with two-thirds of respondents indicating they believed those practices to benefit learning. Additionally, less than half the faculty answered the questions on multitasking (40.6% correct) and digital natives (43.8% correct) correctly. While faculty were not effective at identifying misconceptions related to pedagogy, they were more accurate in identifying effective approaches, with nearly all faculty reporting that direct instruction (97.2% correct) and practice testing (94.4% correct) are beneficial to students. Furthermore, faculty recognised the importance of spacing (85.7% correct) and, to a lesser extent, summarisation (66.7% correct) and dual coding (65.4% correct). The full breakdown of correct responses by learning concept across all faculty and according to college can be found in Table 2 below. Of note, data in Table 1 are based on means calculated for each college even if a faculty member did not complete all the items, whereas data in Table 2 required the faculty member to complete all subscales. Because the College of Arts and Letters and the College of Science and Math had several faculty members who did not complete every item, the sample size in Table 2 was reduced by 4.

Faculty from the College of Education, who should show the most expertise regarding pedagogy, scored higher than faculty in other colleges on only one myth/misconception concept: they were more likely to recognise that unguided discovery learning is ineffective (57.1% correct). College of Education faculty scored similarly to faculty from other colleges on all the other concepts, both for the myths and misconceptions and for the effective practices. Additionally, College of Education faculty were split regarding the effectiveness of summarisation for students' learning (50% correct). Faculty across colleges were largely able to identify effective practices, such as retrieval practice and direct instruction, but scored somewhat lower regarding dual coding and summarisation.

Learning Concept             All Faculty   Arts &     Business   Education   Health     Science     University
                             (N = 103)     Letters    (N = 7)    (N = 14)    Science    and Math    College
                                           (N = 36)                          (N = 6)    (N = 29)    (N = 4)
Learning Styles (M/M)        33.0          44.4       14.3       28.6        33.3       31.0        25.0
Discovery Learning (M/M)     33.0          27.8       28.6       57.1        33.3       25.8        50.0
Extrinsic Motivation (M/M)   60.2          66.7       42.9       64.3        50.0       55.2        50.0
Multitasking (M/M)           40.6          40.5       28.6       35.7        50.0       51.6        0.0
Digital Natives (M/M)        43.8          55.6       14.3       35.7        66.7       40.0        50.0
Dual Coding (EP)             65.4          73.0       71.4       64.3        50.0       58.1        50.0
Summarisation (EP)           66.7          64.9       100.0      50.0        66.7       69.0        100.0
Retrieval Practice (EP)      94.4          89.2       100.0      92.9        100.0      96.8        100.0
Direct Instruction (EP)      97.2          97.3       100.0      92.9        100.0      96.7        100.0
Spacing (EP)                 85.7          83.8       100.0      71.4        83.3       96.6        75.0

(M/M) = myth/misconception; (EP) = effective practice. Values are percentages of correct responses.
Table 2. Correct Responses on Each Learning Concept Across All Faculty and By College

Pedagogical Knowledge by Academic Discipline/College

For the third research question, we sought to determine whether there were differences in pedagogical knowledge regarding the 10 concepts according to academic field. In particular, we were interested in whether education faculty showed stronger pedagogical knowledge than faculty from other fields, since pedagogy is the core content of education professors. An examination of the descriptive statistics for the mean scores by college (see Tables 1 and 2 above) revealed that the mean scores of faculty from the various colleges were between 5.63 and 6.38 out of a possible 10. There was no college where faculty averaged more than 65% correct answers.
Additionally, only the subset of faculty who chose not to identify their college scored lower in pedagogical knowledge than those from the College of Education; faculty from all other colleges outperformed education faculty in their level of pedagogical knowledge.

To ascertain whether there were statistical differences in pedagogical knowledge according to academic field, faculty were grouped by college (e.g., Arts and Letters, Business, Education, etc.), and ANOVAs were conducted with scores from the pedagogical knowledge scale as the dependent variable. For all ANOVA analyses, equal variances across groups were assumed, as the assumption of homogeneity of variance was not violated in any of the models. Results showed no statistically significant differences in pedagogical knowledge between faculty from different colleges overall, F(5, 93) = 0.253, p = .937, η² = .013. Thus, while an examination of the descriptive statistics revealed that education faculty did not outperform faculty from other areas, inferential analysis indicated that faculty from across the university scored similarly on overall pedagogical knowledge.

To extend the analyses, we sought to determine whether there were differences in pedagogical knowledge according to academic field on either the myths and misconceptions or the effective practices. ANOVAs revealed no differences by college on either the myths/misconceptions, F(5, 93) = 1.428, p = .221, η² = .071, or the effective practices, F(5, 93) = 1.700, p = .142, η² = .084. Therefore, faculty from all colleges demonstrated similar levels of knowledge of both myths/misconceptions and effective practices. These results were surprising because faculty from the College of Education would be expected to score higher in pedagogical knowledge on both subscales, since pedagogical knowledge is central to their field. Yet this was not the case. There would be no reason to expect faculty from the other colleges to perform better or worse than one another, since pedagogical knowledge generally does not fall directly within their fields of study or expertise.

Pedagogical Knowledge by Rank

Also of interest was whether faculty performed differently on pedagogical knowledge according to academic rank, which may be considered a reflection of teaching experience. Respondents were grouped by rank (i.e., instructor, lecturer, assistant professor, associate professor, professor), and ANOVAs were conducted with scores on overall pedagogical knowledge, myths/misconceptions, and effective practices as dependent variables. Only full-time faculty were included in the sample. Significant main effects by rank were revealed for overall pedagogical knowledge, F(4, 100) = 3.020, p = .021, η² = .108, and for the myths/misconceptions, F(4, 100) = 2.836, p = .028, η² = .102, but not for the effective practices, F(4, 100) = 1.455, p = .222, η² = .055.

For overall pedagogical knowledge, LSD post hoc analyses identified that professors (p = .008), associate professors (p = .019), and lecturers (p = .044) scored significantly higher than assistant professors. Furthermore, professors scored significantly higher than lecturers (p = .038). The LSD post hoc analyses for myths/misconceptions revealed that professors correctly identified myths at significantly higher rates than assistant professors (p = .008) and instructors (p = .015).
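As an illustration of this kind of analysis, the sketch below runs a one-way ANOVA across rank groups followed by pairwise comparisons. It is a simplified stand-in rather than the authors' code, assuming scores are grouped in a dict keyed by rank; note that Fisher's LSD proper uses the pooled error term from the omnibus model, whereas the plain pairwise t-tests used here for brevity only approximate it.

```python
from itertools import combinations
from scipy import stats

def anova_with_pairwise(scores_by_group: dict):
    """One-way ANOVA, then unadjusted pairwise t-tests between all groups."""
    f_stat, p_omnibus = stats.f_oneway(*scores_by_group.values())
    pairwise_p = {
        (a, b): stats.ttest_ind(scores_by_group[a], scores_by_group[b]).pvalue
        for a, b in combinations(scores_by_group, 2)
    }
    return f_stat, p_omnibus, pairwise_p

# Hypothetical usage with placeholder group data:
# f, p, pairs = anova_with_pairwise({
#     "professor": [...], "associate": [...], "assistant": [...],
#     "lecturer": [...], "instructor": [...]})
```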
Full and associate professors outperformed less experienced professors, potentially due to having more expertise and more years of instructional experience on average. Descriptive statistics by rank may be found in Tables 3, 4, and 5 below. Note that in Table 3 the possible score ranged from 0 to 10, whereas in Tables 4 and 5 the possible score ranged from 0 to 5. For myths and misconceptions, the scores for faculty at all ranks fell below the 50% mark (M = 2.5) except for professors, who scored just above it (M = 2.55).

Rank              N     Mean   Std. Dev   Std. Error
Full Professors   26    6.58   1.27       0.25
Associate Profs   29    6.41   2.13       0.40
Assistant Profs   28    5.39   1.47       0.28
Lecturers         18    6.39   1.42       0.33
Instructors       4     4.75   0.50       0.25
Total             105   6.11   1.68       0.16

Table 3. Pedagogical Knowledge Scores by Rank Overall

Rank              N     Mean   Std. Dev   Std. Error
Full Professors   26    2.55   0.50       0.10
Associate Profs   29    2.43   0.41       0.08
Assistant Profs   28    2.25   0.38       0.07
Lecturers         18    2.44   0.39       0.09
Instructors       4     2.00   0.16       0.08
Total             105   2.40   0.43       0.04

Table 4. Pedagogical Knowledge Scores by Rank for Myths and Misconceptions

Rank              N     Mean   Std. Dev   Std. Error
Full Professors   26    3.01   0.32       0.06
Associate Profs   29    2.89   0.22       0.04
Assistant Profs   28    2.89   0.29       0.05
Lecturers         18    3.00   0.26       0.06
Instructors       4     3.10   0.38       0.19
Total             105   2.95   0.28       0.03

Table 5. Pedagogical Knowledge Scores by Rank for Effective Practices

Metacognitive Awareness of Pedagogical Knowledge

Of particular interest was whether faculty demonstrated metacognitive awareness of their own levels of pedagogical knowledge. As noted in the method section above, metacognition, or metacognitive awareness, has typically been measured by assessing respondents' confidence in their own knowledge or ability, then assessing them on an objective test of that knowledge or ability and conducting a correlational analysis to determine whether their self-beliefs correspond with actual knowledge or performance levels. For this analysis, we conducted Pearson correlations between faculty's scores on the Confidence in Pedagogical Knowledge Scale and their scores on the pedagogical knowledge items to test for a relationship between the two.

When measuring the metacognitive awareness of pedagogy for all participating faculty, a Pearson correlation revealed a weak, non-significant negative correlation, r(107) = -.157, p = .105. Accurate metacognitive awareness is shown only when there is a significant positive correlation between one's confidence in one's own knowledge or ability and one's actual levels in those areas. A negative or non-significant correlation indicates poor metacognition, in that respondents' views of their own knowledge or abilities do not correspond with their actual levels. That was the case across all faculty in the study. In regard to faculty in the College of Education, who we hypothesised would have greater pedagogical knowledge and awareness of their own levels of expertise in the area, we again found no correlation between their self-reported level of expertise and their actual level of pedagogical knowledge, r(14) = .003, p = .992. We also tested for differences in metacognitive awareness between faculty based on academic rank; however, no significant differences were found, with more experienced faculty showing no greater awareness in this regard than newer faculty.
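The correlational test of metacognitive calibration reported above can be illustrated in a few lines, assuming two equal-length arrays holding each respondent's confidence composite and knowledge score (the values below are placeholders, not study data):

```python
from scipy import stats

# Placeholder values for illustration only; not the study's data.
confidence = [3.8, 4.1, 2.9, 4.5, 3.2]   # composite self-reported confidence
knowledge = [6, 5, 7, 4, 6]              # pedagogical knowledge scores (0-10)

r, p = stats.pearsonr(confidence, knowledge)
# A significant positive r would indicate good metacognitive calibration;
# a near-zero or negative r, as observed in this study, indicates poor calibration.
print(f"r = {r:.3f}, p = {p:.3f}")
```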
Overall, faculty reported high confidence that their teaching was heavily influenced by their familiarity with effective teaching practices (81.2% agreement) and did not believe that other faculty members had better knowledge of those practices (87.7% agreement). While faculty did show recognition of effective practices, they also endorsed myths and misconceptions regarding pedagogy, the area in which they scored most poorly.

5. Discussion

The goals of this study were to assess the pedagogical knowledge, as a function of both effective practices and misconceptions, of faculty at a large teaching-centred state university and to gauge their metacognitive awareness regarding their own instructional practices. In addition, we sought to discern whether these outcomes varied according to academic discipline, as represented by college, and by academic rank, which can often serve as a proxy for experience in higher education.

The data present a picture of faculty who tended to characterise all the pedagogical approaches they were presented with as effective, regardless of whether those approaches were myths or misconceptions or actually effective strategies. This resulted in a dynamic in which faculty correctly classified effective practices as beneficial to learning but also incorrectly endorsed myths and misconceptions. This suggests that faculty at the university in this study are often incorrect in their assumptions regarding ineffective practices and mistakenly believe practices debunked by research benefit students. This finding is consistent with recent research indicating that educators continue to report holding beliefs in neuromyths despite a wealth of evidence to the contrary (Macdonald et al., 2017; Newton & Salvi, 2020; Rousseau, 2021).

Faculty answered the majority of the pedagogical items correctly, with most incorrect answers coming in the form of endorsing myths and misconceptions as effective practices. For example, two-thirds of faculty believed unguided discovery learning and learning styles to be beneficial to student learning, while more than half were incorrect about multitasking and digital natives. The only misconception that the majority of faculty correctly characterised was the use of extrinsic motivators, with most classifying reward systems as ineffective long-term instructional strategies. For the effective practices, the vast majority of faculty correctly recognised the efficacy of direct instruction, retrieval practice, and spacing, with a smaller majority recognising the effectiveness of summarisation and dual coding. While faculty demonstrated an understanding that certain research-supported strategies are effective, many also believed several of the myths and misconceptions to be beneficial. This may indicate that instructors are not successfully differentiating between effective and ineffective practices, a finding consistent with prior research (Rohrer & Pashler, 2010), and suggests that faculty may often unknowingly choose ineffective methods. If this is the case, then many unproductive teaching methods are likely being used in college classrooms, as well as at the public-school level, where similar findings have emerged (Lethaby & Harries, 2016).

The results were somewhat less clear regarding the pedagogical knowledge of faculty according to rank. We did not have specific expectations about how faculty at different ranks would perform.
It was possible that higher-ranking members might show stronger pedagogical knowledge due to having more experience in instruction, although, considering that some lower-ranked faculty may have been employed at other universities previously or held multiple postdoc positions, higher-ranked faculty may not universally have been more experienced. Another possibility was that newer faculty would be more familiar with learning science than longer-tenured ones because they had been exposed to the latest research more recently when completing their doctorates. The former scenario appeared more likely to be the case, with full professors and associate professors outperforming assistant professors in overall pedagogical knowledge. While faculty of all ranks performed similarly in identifying effective instructional practices, full professors were less likely to endorse myths and misconceptions than assistant professors and instructors. Thus, faculty across ranks appeared to hold similar views regarding effective practices. Still, the more newly hired or less experienced faculty were more likely to endorse myths and misconceptions than tenured faculty. One possibility based on these results is that newly minted Ph.D. holders are not being exposed to accurate learning science in their doctoral programmes and that myths and misconceptions are proliferating at that level.

The most notable contributions of this study emerged from analyses resulting in non-significant findings. Surprisingly, College of Education faculty, whose academic discipline is entirely rooted in pedagogy, did not demonstrate better understanding of these research-based, well-established instructional concepts than faculty from other disciplines. With the exception of unguided discovery learning, education faculty were just as likely to endorse myths and misconceptions about learning and no more likely to recognise practices supported by well-established research. In fact, while the difference was not statistically significant, education faculty scored lower in pedagogical knowledge than the faculty from each of the other colleges.

Additionally, faculty across the university showed a lack of metacognitive awareness in regard to their pedagogical knowledge. The absence of a positive correlation between faculty members' confidence in their knowledge of teaching and learning practices and their actual knowledge of those practices revealed a limited understanding of their own knowledge in the area. In short, being confident in their knowledge of pedagogy was not related to actually having high levels of knowledge on the topic. This dynamic was true of faculty from the College of Education as well. Individuals with low levels of knowledge yet high levels of confidence in that knowledge are unlikely to change their views and seek ways to improve their knowledge or performance. In this case, such faculty would be unlikely to improve upon or learn about new instructional techniques over time.

The dichotomy between actual knowledge and confidence in one's own knowledge was underscored by the preponderance of faculty, nearly 88%, who believed that others at the university did not know more about pedagogical approaches than they did. In one respect, this demonstrates high self-efficacy in their teaching practices, but it may also be cause for concern.
Considering that the vast majority of faculty participants were trained in and taught in fields in which learning science was not central to their discipline, a belief that no others at the university had more expertise in pedagogy could lead to circumstances in which faculty are uninterested in pursuing more effective methods or learning about emerging research on teaching practices. This situation may mirror that of K-12 education, where public school teachers may receive limited instruction in learning science and ultimately default to relying on anecdotal experience to guide their practice.

This particular dynamic, high levels of confidence paired with low levels of knowledge, represents the well-established Dunning-Kruger effect (Kruger & Dunning, 1999). The Dunning-Kruger effect most commonly appears in respondents with the lowest levels of knowledge on a subject, when those with little expertise overclaim and believe their knowledge to be high in a field with which they have limited familiarity (Atir et al., 2015; Pennycook et al., 2017). One novel contribution of the present study is that the participants cannot be viewed as having the lowest levels of expertise yet still demonstrated what can be considered the Dunning-Kruger effect, because confidence far exceeded actual knowledge levels. Faculty almost universally held terminal degrees in an academic field, most likely from national research universities. This suggests that the effect does not apply only to those with the lowest levels of knowledge or those whose knowledge is outside their field. In this case, all participants had relatively high levels of experience and some knowledge of the learning sciences. However, it appears that the learning science on pedagogy is specialised enough that even those with extensive teaching experience may have limited knowledge of it while remaining confident in their familiarity with the subject.

It is important to note that there is no reason to conclude that the issues revealed here are unique or specific to the university that served as the focus of the study, as the participants were not educated at this institution. The vast majority of faculty at the university received their Ph.D.s from what are considered to be high-level research universities, categorised under the Carnegie Classification system as "Doctoral Universities: Very High Research Activity". Thus, it is likely that their cohort members educated at the same universities who secured positions teaching at other institutions would hold similar views. Considering this situation, these results may indicate much wider issues in the preparation of university faculty for teaching purposes.

Limitations

One limitation of the present study was that the assessment designed to measure knowledge of pedagogical practices was restricted to ten items. There were two primary reasons for this. First, as an extension of a much larger survey instrument, we were limited in the number of items we were able to introduce. It can certainly be argued that ten items are not enough to capture the full range of pedagogical practices, without enough breadth to ascertain the full scope of possibilities, though using a mixture of effective practices and myths/misconceptions allowed for more nuanced analyses of pedagogical knowledge.
The second reason for the limited number of pedagogical items was that it was somewhat challenging to identify approaches for which the research evidence was abundant enough that the science could be considered settled and which were not confined to certain developmental levels or content areas. For instance, a practice such as interleaving is well supported across age ranges, but research has most commonly linked it to math instruction (Rohrer & Pashler, 2010), and it is less clear how it may apply to language arts or history instruction. Likewise, practices such as the use of manipulatives have been shown to be effective, but mostly for young children and in somewhat narrow contexts (Willingham, 2017). For these reasons, the measure of pedagogical knowledge was limited to a short list of practices.

Another potential limitation concerns the question of how settled is "settled". Unlike the hard sciences, research in the social sciences rarely approaches that level of consensus, and there may be continued debate and contradictory findings for decades. We therefore chose concepts for which we determined there was the greatest consensus among researchers and the greatest abundance of robust empirical evidence, drawing upon concepts such as those compiled by organisations like the American Psychological Association (APA, 2015) or a compilation of seminal studies in educational psychology (Kirschner & Hendrick, 2020), each of which is supported by hundreds of studies. Although some academics may dispute our choices, we are confident in the validity and reliability of the concepts we included, even though these criteria limited our options.

Additionally, the reliability of the Confidence in Pedagogical Knowledge Scale was not as strong as we would have liked (α = .79). Reliability was further tested by removing each of the five items in turn to ascertain whether any four-item version of the scale proved more reliable, but the highest reliability was achieved by including all five items. Future researchers could improve upon this scale by introducing stronger items or replacing ones from the current scale. Nonetheless, while the reliability of the present scale would ideally have been stronger, it was acceptable.

A final limitation was that the data were collected from just one large state university and the sample was restricted to 107 faculty members. But as noted above, these faculty members were trained at a wide range of universities, the majority of them national research universities, so we do not view the results as merely a reflection of the one institution that was the focus of the study. However, we recommend that future researchers extend their samples to several institutions at a variety of levels, such as community colleges, comprehensive universities, and research universities, as well as to teachers in K-12 settings.
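The item-deletion reliability check described above can be sketched as follows; the function implements the standard Cronbach's alpha formula, and the DataFrame of five Likert-type confidence items is simulated for illustration rather than drawn from the study's data (real scale data would yield a meaningful alpha, whereas random data will not).

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.integers(1, 6, size=(107, 5)),
                  columns=[f"conf_item_{i}" for i in range(1, 6)])

# Alpha for the full five-item scale, then alpha with each item deleted,
# mirroring the check for whether any four-item version would be more reliable.
print(f"alpha (all items): {cronbach_alpha(df):.3f}")
for col in df.columns:
    print(f"alpha without {col}: {cronbach_alpha(df.drop(columns=col)):.3f}")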
Implications

In sum, the data suggest that faculty cannot reliably distinguish the pedagogical approaches that learning science supports from those it does not, yet they feel relatively confident that they can. Faculty were able to correctly identify effective practices but could not distinguish them from myths and misconceptions, and the widespread endorsement of certain misconceptions, such as learning styles and discovery learning, indicates that faculty may not be employing the most efficacious teaching methods. The limited metacognitive awareness further suggests that such faculty members are unlikely to search out better methods if they are unaware that a variety of concepts they endorse do not benefit learning. It is understandable that faculty members from the colleges of Arts and Letters, Business, Math, and Sciences would not be as aware of the cognitive science supporting different learning strategies, as their doctoral preparation focused on promoting the highest levels of knowledge and research connected to their specific discipline. However, the issue is more complex for those in the College of Education.

The remedies for faculty from colleges besides Education are straightforward. University programmes that prepare Ph.D. candidates to be college faculty can ensure that they include courses structured to familiarise future faculty with recent research on human learning and with how cognitive science informs teaching practices in real-world applications. Additionally, most universities have centres for teaching and learning designed to provide professional development for faculty, particularly in the area of pedagogy. These centres should endeavour to provide the most recent and accurate information and avoid endorsing concepts shown by learning science to be misconceptions. These two options are already instituted at many universities; our results suggest that they must be done better.

The issues regarding remedies for colleges of education are more daunting. Pedagogy is the central content of these programmes. Colleges of education prepare K-12 teachers, K-12 administrators, district and state-level administrators, and very often university administrators who receive their doctorates in leadership programmes housed in colleges of education. If education professors are not aware of the most well-established teaching and learning methods, then their students, including K-12 teachers and administrators at every level, will not become aware of them either. And because nearly all of the education faculty at the university in this study received their Ph.D.s from a range of R-1 institutions, the issue may be widespread. State and national accrediting bodies may not sufficiently account for professors' level of pedagogical knowledge, which may partially explain why misinformation about learning, such as learning styles, is included in teacher licensure materials in the majority of states (Furey, 2020). Questions have arisen about the efficacy of the training provided by colleges of education (Asher, 2018; Goldhaber, 2019), and our results appear to underscore those concerns. These results, if replicated in further studies, should compel those in academia to rethink and perhaps overhaul the foundation of colleges of education, in addition to making substantial alterations to accrediting bodies for colleges of education and state boards of education in the U.S. These organisations may not be meeting their primary responsibility of ensuring that their graduates adequately understand teaching and learning practices. It should not be sufficient to simply train teachers to function within a system. They should be taught what works to enhance student learning in order to become better teachers, but that is not possible if those teaching them at the university level do not have a full understanding of the fundamental principles of learning.
A first step in the process would be for universities and colleges to acknowledge the potential limitation and modify curricula to ensure that adequate attention is devoted to coursework on how human learning occurs. This would necessitate employing personnel with the expertise to teach such courses, which may entail universities hiring instructors with skill sets that are not currently prioritised in order to advance that particular knowledge base. For existing faculty who may not have had the benefit of coursework in the learning sciences, professional development can be offered. This may consist of learning modules in which faculty read and discuss research on learning outcomes and instructional strategies from fields such as cognitive psychology, neuroscience, educational psychology, and cognitive science. Ideally, these modules would incorporate the very methods being studied and provide videos and demonstrations of the strategies in use in classroom settings. This is a general overview of initial steps that may be taken, but a key point is that this training should ultimately reach current faculty, graduate students who are likely to one day be faculty or involved in higher education, and those preparing for roles in K-12 education.

6. Conclusion

This study, along with a growing body of research, suggests that instructors are not currently being adequately trained in the cognitive science that informs teaching and learning. If myths and misconceptions about learning persist, those instructors will be unlikely to optimise student learning. By acquiring a more comprehensive understanding of learning science, university instructors will have the opportunity to employ teaching practices shown to enhance cognition, such as methods that increase retention of information, problem-solving skills, or procedural knowledge. There is little doubt that their students would benefit from the use of research-based practices. This is especially true of faculty in colleges of education, who prepare K-12 teachers and administrators at a variety of levels, because an understanding of such approaches could then be transferred to those who would put them to use in classrooms with younger learners. We recommend that universities prioritise an emphasis on learning science to ensure that the candidates they train for teaching positions are aware of effective teaching and learning practices and can distinguish them from ineffective ones, which should ultimately enhance educational outcomes for all involved.

About the Authors

Joshua A. Cuevas, ORCID ID: 0000-0003-3237-6670, University of North Georgia, [email protected]
Bryan L. Dawson, University of North Georgia, [email protected]
Gina Childers, Texas Tech University, [email protected]

References

Alloway, T. P., Gathercole, S. E., & Pickering, S. J. (2006). Verbal and visuospatial short-term and working memory in children: Are they separable? Child Development, 77(6), pp. 1698-1716. https://doi.org/10.1111/j.1467-8624.2006.00968.x.
American Psychological Association, Coalition for Psychology in Schools and Education. (2015). Top 20 principles from psychology for preK-12 teaching and learning. Retrieved from www.apa.org/ed/schools/cpse/top-twenty-principles.pdf.
Anderson, M. C., & Thiede, K. W. (2008). Why do delayed summaries improve metacomprehension accuracy? Acta Psychologica, 128, pp. 110-118. https://doi.org/10.1016/j.actpsy.2007.10.006.
Asher, L. (2018, April 8). How ed schools became a menace: They trained an army of bureaucrats who are pushing the academy toward ideological fundamentalism. The Chronicle of Higher Education. https://www.chronicle.com/article/How-Ed-Schools-Became-a-Menace/243062.
Atir, S., Rosenzweig, E., & Dunning, D. (2015). When knowledge knows no bounds: Self-perceived expertise predicts claims of impossible knowledge. Psychological Science, 26(8), pp. 1295-1303. https://doi.org/10.1177/0956797615588195.
Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C. L. C. (1991). Effects of frequent classroom testing. Journal of Educational Research, 85, pp. 89-99. https://doi.org/10.1080/00220671.1991.10702818.
Bennett, S., Maton, K., & Kervin, L. (2008). The 'digital natives' debate: A critical review of the evidence. British Journal of Educational Technology, 39(5), pp. 775-786. https://doi.org/10.1111/j.1467-8535.2007.00793.x.
Bogaerds-Hazenberg, S., Evers-Vermeul, J., & van den Bergh, H. (2020). A meta-analysis on the effects of text structure instruction on reading comprehension in the upper elementary grades. Reading Research Quarterly, 56(3), pp. 435-462. https://doi.org/10.1002/rrq.311.
Brown, A. L., & Day, J. D. (1983). Macrorules for summarizing texts: The development of expertise. Journal of Verbal Learning and Verbal Behavior, 22, pp. 1-14. https://doi.org/10.1016/S0022-5371(83)80002-4.
Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H. K., & Pashler, H. (2012). Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction. Educational Psychology Review, 24(3), pp. 369-378. https://doi.org/10.1007/s10648-012-9205-z.
Carr, E., & Ogle, D. (1987). K-W-L Plus: A strategy for comprehension and summarization. Journal of Reading, 30(7), pp. 626-631. https://eric.ed.gov/?id=EJ350560.
Cepeda, N. J., Coburn, N., Rohrer, D., Wixted, J. T., Mozer, M. C., & Pashler, H. (2009). Optimizing distributed practice: Theoretical analysis and practical implications. Experimental Psychology, 56(4), pp. 236-246. https://doi.org/10.1027/1618-3169.56.4.236.
Chang, K., Sung, Y., & Chen, I. (2002). The effect of concept mapping to enhance text comprehension and summarization. The Journal of Experimental Education, 71(1), pp. 5-23. https://doi.org/10.1080/00220970209602054.
Clark, R. E., Kirschner, P. A., & Sweller, J. (2012). Putting students on a path to learning: The case for fully guided instruction. American Educator, 36(1), pp. 6-11. https://eric.ed.gov/?id=EJ971752.
Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational Psychology Review, 3(3), pp. 149-210. https://doi.org/10.1007/BF01320076.
Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning & Skills Research Centre, London.
Cuevas, J. A. (2015). Is learning styles-based instruction effective? A comprehensive analysis of recent research on learning styles. Theory and Research in Education, 13(3), pp. 308-333. https://doi.org/10.1177/1477878515606621.
Cuevas, J. A. (2016a). An analysis of current evidence supporting two alternate learning models: Learning styles and dual coding. Journal of Educational Sciences & Psychology, 6(1), pp. 1-13. https://www.researchgate.net/publication/301692526.
Cuevas, J. A. (2016b). Cognitive psychology's case for teaching higher order thinking. Professional Educator, 15(4), pp. 4-7. https://www.academia.edu/28947876.
Cuevas, J. A. (2017). Visual and auditory learning: Differentiating instruction via sensory modality and its effects on memory. In Student Achievement: Perspectives, Assessment and Improvement Strategies (pp. 29-54). Nova Science Publishers. ISBN-13: 978-1536102055.
Cuevas, J. A. (2019). Addressing the crisis in education: External threats, embracing cognitive science, and the need for a more engaged citizenry. In R. V. Nata (Ed.), Progress in Education (Vol. 55, pp. 1-38). Nova Science Publishers. ISBN: 978-1-53614-551-9.
Cuevas, J. A., Childers, G., & Dawson, B. L. (2023). A rationale for promoting cognitive science in teacher education: Deconstructing prevailing learning myths and advancing research-based practices. Trends in Neuroscience and Education. https://doi.org/10.1016/j.tine.2023.100209.
Cuevas, J. A., & Dawson, B. L. (2018). A test of two alternative cognitive processing models: Learning styles and dual coding. Theory and Research in Education, 16(1), pp. 40-64. https://doi.org/10.1177/1477878517731450.
Dawson, B. L., & Cuevas, J. A. (2019). An assessment of intergroup dynamics at a multi-campus university: One university, two cultures. Studies in Higher Education, 45(6), pp. 1047-1063. https://doi.org/10.1080/03075079.2019.1628198.
Day, J. (1986). Teaching summarization skills: Influences of student ability level and strategy difficulty. Cognition and Instruction, 3(3), pp. 193-210. https://doi.org/10.1207/s1532690xci0303_3.
Deci, E., Vallerand, R., Pelletier, L., & Ryan, R. (1991). Motivation and education: The self-determination perspective. Educational Psychologist, 26(3&4), pp. 325-346. https://doi.org/10.1080/00461520.1991.9653137.
Deci, E., Koestner, R., & Ryan, R. (2001). Extrinsic rewards and intrinsic motivation in education: Reconsidered once again. Review of Educational Research, 71(1), pp. 1-27. https://doi.org/10.3102/00346543071001001.
Demirbilek, M., & Talan, T. (2017). The effect of social media multitasking on classroom performance. Active Learning in Higher Education, 19(2), pp. 117-129. https://doi.org/10.1177/1469787417721382.
Di Virgilio, G., & Clarke, S. (1997). Direct interhemispheric visual input to human speech areas. Human Brain Mapping, 5, pp. 347-354. https://doi.org/10.1002/(SICI)1097-0193(1997)5:5<347::AID-HBM3>3.0.CO;2-3.
Dunlosky, J., Rawson, K. A., & Middleton, E. L. (2005). What constrains the accuracy of metacomprehension judgments? Testing the transfer-appropriate-monitoring and accessibility hypotheses. Journal of Memory and Language, 52, pp. 551-565. https://doi.org/10.1016/j.jml.2005.01.011.
Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students' learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), pp. 4-58. https://doi.org/10.1177/1529100612453266.
Fiebach, C. J., & Friederici, A. D. (2003). Processing concrete words: fMRI evidence against a specific right-hemisphere involvement. Neuropsychologia, 42(1), pp. 62-70. https://doi.org/10.1016/S0028-3932(03)00145-3.
Furey, W. (2020). The stubborn myth of "learning styles": State teacher-license prep materials peddle a debunked theory. Education Next, 20(3), pp. 8-12. https://www.educationnext.org/.
Furtak, E. M., Seidel, T., Iverson, H., & Briggs, D. C. (2012). Experimental and quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Review of Educational Research, 82, pp. 300-329. https://doi.org/10.3102/0034654312457206.
Gill, A., Trask-Kerr, K., & Vella-Brodrick, D. (2021). Systematic review of adolescent conceptions of success: Implications for wellbeing and positive education. Educational Psychology Review, 33, pp. 1553-1582. https://doi.org/10.1007/s10648-021-09605-w.
Goldhaber, D. (2019). Evidence-based teacher preparation: Policy context and what we know. Journal of Teacher Education, 70(2), pp. 90-101. https://doi.org/10.1177/0022487118800712.
Hagaman, J. L., Casey, K. J., & Reid, R. (2016). Paraphrasing strategy instruction for struggling readers. Preventing School Failure: Alternative Education for Children and Youth, 60, pp. 43-52. https://doi.org/10.1080/1045988X.2014.966802.
Higher Education Research Institute. (2015, October). HERI research brief. https://www.heri.ucla.edu/briefs/DLE/DLE-2015-Brief.pdf.
Hite, R., Jones, M. G., Childers, G., Chesnutt, K., Corin, E., & Pereyra, M. (2017). Pre-service and in-service science teachers' technological acceptance of 3D, haptic-enabled virtual reality instructional technology. Electronic Journal of Science Education, 23(1), pp. 1-34. https://eric.ed.gov/?id=EJ1203195.
Hodes, C. L. (1998). Understanding visual literacy through visual informational processing. Journal of Visual Literacy, 18(2), pp. 131-136. https://doi.org/10.1080/23796529.1998.11674534.
Jones, C., Ramanau, R., Cross, S., & Healing, G. (2010). Net generation or digital natives: Is there a distinct new generation entering university? Computers & Education, 54(3), pp. 722-732. https://doi.org/10.1016/j.compedu.2009.09.022.
Junco, R., & Cotten, S. (2011). No A 4 U: The relationship between multitasking and academic performance. Computers & Education, 59(2), pp. 505-514. https://doi.org/10.1016/j.compedu.2011.12.023.
Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99, pp. 122-149. https://doi.org/10.1037/0033-295X.99.1.122.
Karpicke, J. D., & Grimaldi, P. J. (2012). Retrieval-based learning: A perspective for enhancing meaningful learning. Educational Psychology Review, 24(3), pp. 401-418. https://doi.org/10.1007/s10648-012-9202-2.
Kintsch, W., & van Dijk, T. (1978). Toward a model of text comprehension and production. Psychological Review, 85, pp. 363-394. https://doi.org/10.1037/0033-295X.85.5.363.
Kirschner, P. A., & De Bruyckere, P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67, pp. 135-142. https://doi.org/10.1016/j.tate.2017.06.001.
Kirschner, P. A., & Hendrick, C. (2020). How learning happens: Seminal works in educational psychology and what they mean in practice. New York, NY: Routledge.
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), pp. 75-86. https://doi.org/10.1207/s15326985ep4102_1.
Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), pp. 169-183. https://doi.org/10.1080/00461520.2013.804395.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), pp. 1121-1134. https://doi.org/10.1037/0022-3514.77.6.1121.
Larsen, D. P. (2018). Planning education for long-term retention: The cognitive science and implementation of retrieval practice. Seminars in Neurology, 38(4), pp. 449-456. https://doi.org/10.1055/s-0038-1666983.
Latimier, A., Peyre, H., & Ramus, F. (2021). A meta-analytic review of the benefit of spacing out retrieval practice episodes on retention. Educational Psychology Review, 33, pp. 959-987. https://doi.org/10.1007/s10648-020-09572-8.
Lee, F. J., & Taatgen, N. (2002). Multitasking as skill acquisition. CogSci'02: Proceedings of the Cognitive Science Society, August 2002.
Leopold, C., & Leutner, D. (2012). Science text comprehension: Drawing, main idea selection, and summarizing as learning strategies. Learning and Instruction, 22, pp. 16-26. https://doi.org/10.1016/j.learninstruc.2011.05.005.
Lethaby, C., & Harries, P. (2016). Learning styles and teacher training: Are we perpetuating neuromyths? ELT Journal, 70(1), pp. 16-27. https://doi.org/10.1093/elt/ccv051.
Lorch, R., & Pugzles Lorch, E. (1996). Effects of headings on text recall and summarization. Contemporary Educational Psychology, 21(3), pp. 261-278. https://doi.org/10.1006/ceps.1996.0022.
Macdonald, K., Germine, L., Anderson, A., Christodoulou, J., & McGrath, L. M. (2017). Dispelling the myth: Training in education or neuroscience decreases but does not eliminate beliefs in neuromyths. Frontiers in Psychology, 8:1314. https://doi.org/10.3389/fpsyg.2017.01314.
Margaryan, A., Littlejohn, A., & Vojt, G. (2011). Are digital natives a myth or reality? University students' use of digital technologies. Computers & Education, 56(2), pp. 429-440. https://doi.org/10.1016/j.compedu.2010.09.004.
Mayer, R. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59(1), pp. 14-19. https://doi.org/10.1037/0003-066X.59.1.14.
McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103(2), pp. 399-414. https://doi.org/10.1037/a0021782.
Mercimek, B., Akbulut, Y., Dönmez, O., & Sak, U. (2020). Multitasking impairs learning from multimedia across gifted and non-gifted students. Educational Technology, Research and Development, 68(3), pp. 995-1016. https://doi.org/10.1007/s11423-019-09717-9.
Nancekivell, S. E., Sun, X., Gelman, S. A., & Shah, P. (2021). A slippery myth: How learning style beliefs shape reasoning about multimodal instruction and related scientific evidence. Cognitive Science. https://doi.org/10.1111/cogs.13047.
National Research Council. (2018). How people learn II: Learners, contexts, and cultures. Washington, DC: National Academies Press.
Newton, P. M., & Salvi, A. (2020). How common is belief in the learning styles neuromyth, and does it matter? A pragmatic systematic review. Frontiers in Education, 5:602451. https://doi.org/10.3389/feduc.2020.602451.
Paivio, A. (1969). Mental imagery in associative learning and memory. Psychological Review, 76, pp. 241-263.
Paivio, A. (1986). Mental representations. New York, NY: Oxford University Press.
Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2009). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), pp. 105-119. https://doi.org/10.1111/j.1539-6053.2009.01038.x.
Pennycook, G., & Rand, D. G. (2019). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality, 88(2), pp. 185-200. https://doi.org/10.1111/jopy.12476.
Pennycook, G., Ross, R. M., Koehler, D. J., & Fugelsang, J. A. (2017). Dunning-Kruger effects in reasoning: Theoretical implications of the failure to recognize incompetence. Psychonomic Bulletin & Review, 24(6), pp. 1774-1784. https://doi.org/10.3758/s13423-017-1242-7.
Perin, D., Lauterbach, M., Raufman, J., & Kalamkarian, H. S. (2017). Text-based writing of low-skilled postsecondary students: Relation to comprehension, self-efficacy and teacher judgments. Reading and Writing, 30(4), pp. 887-915. https://doi.org/10.1007/s11145-016-9706-0.
Prensky, M. (2001). Digital natives, digital immigrants part 1. On the Horizon, 9(5), pp. 1-6. https://doi.org/10.1108/10748120110424816.
Prensky, M. (2012). Digital natives to digital wisdom: Hopeful essays for 21st century learning. Thousand Oaks, CA: Corwin.
Riener, C., & Willingham, D. (2010). The myth of learning styles. Change, 42(5), pp. 32-35.
Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), pp. 20-27. https://doi.org/10.1016/j.tics.2010.09.003.
Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), pp. 249-255. https://doi.org/10.1111/j.1467-9280.2006.01693.x.
Rogers, J., & Cheung, A. (2020). Pre-service teacher education may perpetuate myths about teaching and learning. Journal of Education for Teaching, 46(3), pp. 417-420. https://doi.org/10.1080/02607476.2020.1766835.
Rogowsky, B. A., Calhoun, B. M., & Tallal, P. (2015). Matching learning style to instructional method: Effects on comprehension. Journal of Educational Psychology, 107(1), pp. 64-78. https://doi.org/10.1037/a0037478.
Rogowsky, B. A., Calhoun, B. M., & Tallal, P. (2020). Providing instruction based on students' learning style preferences does not improve learning. Frontiers in Psychology, 11:164. https://doi.org/10.3389/fpsyg.2020.00164.
Rohrer, D., & Pashler, H. (2010). Recent research on human learning challenges conventional instructional strategies. Educational Researcher, 39(5), pp. 406-412. https://doi.org/10.3102/0013189X10374770.
Rohrer, D., & Pashler, H. (2012). Learning styles: Where's the evidence? Medical Education, 46(7), pp. 634-635. https://eric.ed.gov/?id=ED535732.
Rosenshine, B. V. (2012). Principles of instruction: Research-based strategies that all teachers should know. American Educator, 36(1), pp. 12-19. https://eric.ed.gov/?id=EJ971753.
Rousseau, L. (2021). Interventions to dispel neuromyths in educational settings—A review. Frontiers in Psychology, 12:719692. https://doi.org/10.3389/fpsyg.2021.719692.
Rowland, C. A. (2014). The effect of testing versus restudy on retention: A meta-analytic review of the testing effect. Psychological Bulletin, 140(6), pp. 1432-1463. https://doi.org/10.1037/a0037559.
Ryan, R., & Deci, E. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25, pp. 54-67. https://doi.org/10.1006/ceps.1999.1020.
Ryan, R. M., & Deci, E. L. (2009). Promoting self-determined school engagement: Motivation, learning, and well-being. In K. R. Wenzel & A. Wigfield (Eds.), Handbook of motivation at school (pp. 171-195). Routledge/Taylor & Francis Group.
Sadoski, M., Goetz, E. T., & Avila, E. (1995). Concreteness effects in text recall: Dual coding or context availability? Reading Research Quarterly, 30(2), pp. 278-288. https://doi.org/10.2307/748038.
Sana, F., Weston, T., & Cepeda, N. (2012). Laptop multitasking hinders classroom learning for both users and nearby peers. Computers & Education, 62, pp. 24-31. https://doi.org/10.1016/j.compedu.2012.10.003.
Scott, C. (2010). The enduring appeal of 'learning styles'. Australian Journal of Education, 54(1), pp. 5-17. https://doi.org/10.1177/000494411005400102.
Seabrook, R., Brown, G. D. A., & Solity, J. E. (2005). Distributed and massed practice: From laboratory to classroom. Applied Cognitive Psychology, 19(1), pp. 107-122. https://doi.org/10.1002/acp.1066.
Sharps, M. J., & Price, J. L. (1992). Auditory imagery and free recall. The Journal of General Psychology, 119(1), pp. 81-87. https://doi.org/10.1080/00221309.1992.9921160.
Shelton, A., Lemons, C., & Wexler, J. (2020). Supporting main idea identification and text summarization in middle school co-taught classes. Intervention in School and Clinic, 56(4), pp. 217-223. https://doi.org/10.1177/1053451220944380.
Smith, J., Skrbis, Z., & Western, M. (2012). Beneath the 'Digital Native' myth: Understanding young Australians' online time use. Journal of Sociology, 49(1), pp. 97-118. https://doi.org/10.1177/1440783311434856.
Solis, M., Ciullo, S., Vaughn, S., Pyle, N., Hassaram, B., & Leroux, A. (2012). Reading comprehension interventions for middle school students with learning disabilities: A synthesis of 30 years of research. Journal of Learning Disabilities, 45(4), pp. 327-340. https://doi.org/10.1177/0022219411402691.
Stevens, E. A., Park, S., & Vaughn, S. (2019). A review of summarizing and main idea interventions for struggling readers in grades 3 through 12: 1978-2016. Remedial and Special Education, 40, pp. 131-149. https://doi.org/10.1177/0741932517749940.
Stockard, J., Wood, T. W., Coughlin, C., & Khoury, C. R. (2018). The effectiveness of direct instruction curricula: A meta-analysis of a half century of research. Review of Educational Research, 88(4), pp. 479-507. https://doi.org/10.3102/0034654317751919.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), pp. 257-285. https://doi.org/10.1207/s15516709cog1202_4.
Sweller, J. (2011). Human cognitive architecture: Why some instructional procedures work and others do not. In K. Harris, S. Graham, & T. Urdan (Eds.), APA Educational Psychology Handbook (Vol. 1). Washington, DC: American Psychological Association.
Sweller, J. (2016). Working memory, long-term memory, and instructional design. Journal of Applied Research in Memory and Cognition, 5, pp. 360-367. https://doi.org/10.1016/j.jarmac.2015.12.002.
Sweller, J., van Merriënboer, J. J. G., & Paas, F. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, pp. 251-296. https://doi.org/10.1023/A:1022193728205.
Thiede, K. W., Anderson, M., & Therriault, D. (2003). Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology, 95, pp. 66-73. https://doi.org/10.1037/0022-0663.95.1.66.
Torrijos-Muelas, M., González-Víllora, S., & Bodoque-Osma, A. R. (2021). The persistence of neuromyths in the educational settings: A systematic review. Frontiers in Psychology, 11:591923. https://doi.org/10.3389/fpsyg.2020.591923.
Vasconcellos, D., Parker, P. D., Hilland, T., Cinelli, R., Owen, K. B., Kapsal, N., Lee, J., Antczak, D., Ntoumanis, N., Ryan, R. M., & Lonsdale, C. (2020). Self-determination theory applied to physical education: A systematic review and meta-analysis. Journal of Educational Psychology, 112(7), pp. 1444-1469. https://doi.org/10.1037/edu0000420.
Wang, S., Hsu, H., Campbell, T., Coster, D., & Longhurst, M. (2014). An investigation of middle school science teachers and students use of technology inside and outside of classrooms: Considering whether digital natives are more technology savvy than their teachers. Educational Technology Research and Development, 62(6), pp. 637-662. https://doi.org/10.1007/s11423-014-9355-4.
Wang, Z., & Tchernev, J. (2012). The "myth" of media multitasking: Reciprocal dynamics of media multitasking, personal needs, and gratifications. Journal of Communication, 62, pp. 493-513. https://doi.org/10.1111/j.1460-2466.2012.01641.x.
Welcome, S. E., Paivio, A., McRae, K., & Joanisse, M. F. (2011). An electrophysiological study of task demands on concreteness effects: Evidence for dual coding theory. Experimental Brain Research, 212(3), pp. 347-358. https://doi.org/10.1007/s00221-011-2734-8.
Westby, C., Culatta, B., Lawrence, B., & Hall-Kenyon, K. (2010). Summarizing expository texts. Topics in Language Disorders, 30(4), pp. 275-287. https://eric.ed.gov/?id=EJ906737.
Willingham, D. (2017). Ask the cognitive scientist: Do manipulatives help students learn? American Educator, 41(3), pp. 25-40. https://www.aft.org/ae/fall2017/willingham.
Willingham, D. (2019). Ask the cognitive scientist: Should teachers know the basic science of how children learn? American Educator, 43(2), pp. 30-36. https://www.aft.org/ae/summer2019/willingham.
Wood, E., & Zivcakova, L. (2015). Multitasking in educational settings. In L. D. Rosen, N. A. Cheever, & M. Carrier (Eds.), The Wiley handbook of psychology, technology and society (pp. 404-419). Hoboken, NJ: John Wiley & Sons, Inc.
Wooten, J. O., & Cuevas, J. A. (2024). The effects of dual coding theory on social studies vocabulary and comprehension in a 5th grade classroom. International Journal on Social and Education Sciences (IJonSES), 6(4), pp. 673-691. https://doi.org/10.46328/ijonses.696.
Zhang, W. (2015). Learning variables, in-class laptop multitasking and academic performance: A path analysis. Computers & Education, 81, pp. 82-88. https://doi.org/10.1016/j.compedu.2014.09.012.

Appendix A

10 Pedagogy Knowledge Items

Myths:
• Learning styles: When classroom instruction is designed to appeal to students' individual learning styles, students are more likely to learn more. (negatively coded/false)
• Discovery learning: Students learn best if they discover information for themselves through activities with minimal guidance when they can randomly experiment on their own. (negatively coded/false)
• Extrinsic motivation: Students' long-term learning outcomes are likely to be better if teachers and professors stimulate extrinsic motivation through things such as rewards. (negatively coded/false)
• Multitasking: Incorporating instruction that involves students in multitasking activities, such as when they think about multiple concepts at once, leads to better learning outcomes. (negatively coded/false)
• Digital natives: Students learn better when digital tools are incorporated within instructional practice because students today are naturally more adept at technology due to having used it from such a young age. (negatively coded/false)

Effective Approaches:
• Dual coding/Imagery for text: It is generally true that students' memory of lesson content tends to be stronger if visuals and images are used to supplement class lectures, discussions, and readings. (true)
• Summarisation: When students summarise content by reducing information into concise, essential ideas, it helps build students' knowledge and strengthen skills. (true)
• Practice testing: Quizzes and practice tests using open-ended questions tend to boost learning even if students do not do well on them. (true)
• Direct instruction: Direct instruction tends to be an ineffective method for teaching content to students. (negatively coded/false)
• Spacing: Students tend to learn more when instruction is delivered via focused, intensive sessions delivered over a brief time period rather than when information is spread out and revisited over longer time spans. (negatively coded/false)

Appendix B

5 Metacognition Items (Cronbach's alpha = .787)
• My teaching is heavily influenced by a strong familiarity with the research on how students learn.
• I am confident that I am knowledgeable of the best practices to enhance student learning.
• I am confident that I utilise the best practices to enhance student learning.
• I feel less familiar with teaching strategies and best practices compared to my colleagues. (negatively coded)
• I am not always confident that my knowledge of pedagogy and how students learn is as strong as it could be. (negatively coded)

Declarations and Compliance with Ethical Standards

Ethics Approval: All procedures performed were in accordance with established ethical standards and were approved by the University of North Georgia Institutional Review Board.
Informed Consent: Informed consent was obtained from all participants included in the study.
Competing Interests: This research was not grant-related. The authors declare that they have no conflict of interest.
Funding: This research was not funded.
Data Availability: The data associated with this study are owned by the University of North Georgia. Interested parties may contact the first or second author regarding availability and access; requests will be considered on an individual basis according to institutional guidelines. Anonymised data output sheets are available by contacting the first or second author of the study.
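To make the coding scheme in Appendix A concrete, the following is a minimal scoring sketch under our own assumptions about item keys and response format (neither appears in the article); it credits endorsement of the statements keyed true and rejection of the negatively coded ones.

# Hypothetical item keys; the keying follows Appendix A, where three
# statements are keyed true and seven are negatively coded (false).
TRUE_ITEMS = {"dual_coding", "summarisation", "practice_testing"}
FALSE_ITEMS = {"learning_styles", "discovery_learning", "extrinsic_motivation",
               "multitasking", "digital_natives",
               "direct_instruction_ineffective", "massed_over_spaced"}

def knowledge_score(responses: dict[str, bool]) -> int:
    """Count correct judgements: endorsing true items and rejecting false ones."""
    score = 0
    for item, endorsed in responses.items():
        keyed_true = item in TRUE_ITEMS
        if endorsed == keyed_true:
            score += 1
    return score

# A respondent who endorses every statement is credited only for the three
# true items, showing how endorsing myths lowers the knowledge score.
endorse_all = {item: True for item in TRUE_ITEMS | FALSE_ITEMS}
print(knowledge_score(endorse_all))  # prints 3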
While there is no debate over the importance of content knowledge in teaching and learning, there is some question regarding whether sufficient attention is devoted to the psychological science on learning that underpins pedagogical knowledge (Cuevas, 2019; Willingham, 2019). Certainly, the curricula for those preparing to be university instructors in any field (e.g., physics, history) focus predominantly on content knowledge, as Ph.D. programmes are expected to produce professors with the highest level of content knowledge possible. In contrast, programmes that prepare K-12 teachers attempt a greater balance, with more attention paid to how to teach material given the developmental level and background of the learner. However, research over previous decades has shown that many educators maintain beliefs that can be classified as cognitive myths (Kirschner & van Merriënboer, 2013; Rogers & Cheung, 2020; Rousseau, 2021). One large study (N = 3877) found that neuromyths, or misconceptions about education and learning, were prevalent among educators, though the effect was moderated by higher levels of education and exposure to peer-reviewed science (Macdonald et al., 2017).

2. Literature Review

A recent systematic review concluded that nearly 90% of educators maintained a belief in neuromyths about learning styles (Newton & Salvi, 2020), and other research has found that neuromyths in general may be particularly resistant to revision among those in teacher education programmes (Rogers & Cheung, 2020). Research also suggests that belief in neuromyths among educators is so prevalent that interventions are now being proposed to correct those misconceptions (Rousseau, 2021). If misconceptions about how humans learn are as widespread as research seems to indicate, then the issue would be twofold: instructors would likely be incorporating ineffective methods into their lessons under the mistaken assumption that such methods will benefit student learning, while also bypassing strategies shown to be effective. Thus, in practice, ineffective methods would displace the instructional strategies known to enhance learning (Rohrer & Pashler, 2010). We therefore reasoned that an understanding of pedagogy based on learning science would be reflected by two characteristics: 1) whether faculty were familiar with well-established learning concepts, and 2) whether they held misconceptions regarding how students learn. In this study we documented the level of pedagogical knowledge of faculty at a large state university in the United States with a teaching emphasis by assessing their understanding of basic learning concepts and misconceptions about learning. A second yet essential component of this research was to measure faculty's metacognitive awareness of their pedagogical knowledge, as an understanding of their own knowledge of how learning occurs is likely to influence their willingness to search out more effective practices (Pennycook et al., 2017).

Learning Concepts

We identified ten learning concepts, five of which are well established by research as beneficial to learning and five of which research has debunked and would be classified as myths or misconceptions. In considering the different learning concepts, we selected concepts for which there would be little debate among learning scientists regarding their efficacy.
Many options were not included because the extent of their effectiveness may depend on nuances of delivery, making them effective in some circumstances but not in others. For instance, the use of manipulatives shows much promise for students in certain content areas and of certain ages but is not universally beneficial (Willingham, 2017). The ten we chose are applicable to all content areas and age groups. The following sections provide succinct summaries of what we deem to be the current research consensus on each of the learning concepts included in the study. More in-depth explanations of each concept can be found in Cuevas et al. (2023).

Myths and Misconceptions

Multitasking

Multitasking is defined as the ability to manage and actively participate in multiple actions simultaneously (Lee & Taatgen, 2002; Wood & Zivcakova, 2015), and a critical analysis of multitasking relies on a framework of working memory and cognitive load (Sweller, 2011). Working memory is an individual's capacity to store and manipulate information over a short period of time (Just & Carpenter, 1992) and has been described as a "flexible mental workspace" (Alloway et al., 2006, p. 1698); cognitive load refers to the processing demands that information places on working memory's limited capacity (Sweller, 2011). As learners manipulate and process information within working memory, cognitive load increases. Generally, if the learning environment contains too many unnecessary components, it creates extraneous cognitive load, reducing a student's capacity to learn. Mercimek et al. (2020) found that multitasking actually impedes students' learning. Other research indicated that multitasking while completing academic tasks had a negative impact on college GPA (Junco & Cotten, 2011), exam grades (Sana et al., 2012), and class grades (Demirbilek & Talan, 2017; Zhang, 2015), suggesting an impairment in cognitive processing. Wang and Tchernev (2012) also note that multitasking results in diminished cognitive performance as memory and cognitive load are taxed by competing stimuli, negatively affecting outcomes; therefore, encouraging multitasking is likely to be detrimental to students.

Learning Styles

The notion that individuals have learning styles and that tailoring instruction to these modalities can enhance students' learning is among the most persistent cognitive myths (Kirschner & van Merriënboer, 2013; Riener & Willingham, 2010; Torrijos-Muelas et al., 2021), one that can have detrimental effects on students (Scott, 2010). Decades after learning styles-based instruction found wide use in educational settings across a wide spectrum of grade levels and in many countries, exhaustive reviews indicate that nearly all available research evidence suggests learning styles do not exist and that adapting instruction to them has no educational benefits (Coffield et al., 2004; Cuevas, 2015, 2016a; Pashler et al., 2009; Rohrer & Pashler, 2012). Well-designed experiments testing the hypothesis have continually debunked the premise, concluding that teaching to learning styles does not enhance learning (Cuevas & Dawson, 2018; Rogowsky et al., 2015, 2020).
The persistent myth of learning styles has been identified as a substantial problem in education that impacts teacher training and the quality of instruction across K-12 and college classrooms (Cuevas, 2017), with some researchers recently exploring ways to dispel such detrimental neuromyths (Nancekivell et al., 2021; Rousseau, 2021).

Digital Natives

Recent generations (e.g., "Millennials" and "Gen Z") have been characterised as being inundated with technology since birth, and as a result, Prensky (2001) suggested that these learners were digital natives who may learn best when teachers integrate technology into instruction. However, researchers have concluded that these claims are not grounded in empirical evidence or informed by sound theoretical perspectives (Bennett et al., 2008; Jones et al., 2010). Margaryan et al. (2011) similarly found no evidence that this generation of learners has differing learning needs. Kirschner and De Bruyckere (2017) contend that the concept of digital natives is a myth, as these learners often do not utilise technological tools effectively, and the use of technology can actually adversely affect knowledge and skill acquisition. Wang et al. (2014) concluded that while students may know how to use technology fluidly within the context of entertainment or social media, they still need guidance from teachers to use technology to support learning. Both Smith et al. (2012) and Hite et al. (2017) concluded that modern students must be taught to use technology for learning purposes just as previous generations were, and that its use does not come naturally in learning environments. Prensky (2012) later revised the concept of digital natives, acknowledging that the premise lacks empirical support. Ultimately, education research suggests that the concept of students as digital natives lacks merit.

Pure Discovery Learning

In their purest form, discovery learning and similar methods of instruction, such as inquiry learning and project-based instruction, are designed to maximise student involvement with minimal guidance from the instructor (Clark et al., 2012; Mayer, 2004). Despite the popularity of such approaches, decades of empirical research have shown that minimally guided instruction is not effective at enhancing student performance (Mayer, 2004) and is not consistent with the cognitive science on human learning (Sweller et al., 1998). Learning is defined by change occurring in long-term memory (Kirschner & Hendrick, 2020), which can encompass higher order processes and updates to episodic, semantic, and procedural memory (Cuevas, 2016b). But because students are tasked with navigating unfamiliar territory on their own during discovery-type learning, such minimally guided instruction places a heavy burden on working memory (Kirschner et al., 2006), leaving fewer cognitive resources available for encoding new information into long-term memory. Cognitive Load Theory holds that effective instructional methods should decrease cognitive load, and that approaches that instead tax working memory and increase cognitive load, as unguided methods do, result in less learning (Kirschner & Hendrick, 2020; Sweller, 2016).

Extrinsic Motivation

According to Deci et al. (1991), motivation within an educational context entails student interest, capacities, and a sense of valuing learning and education. Motivational tendencies can be described as either intrinsic or extrinsic.
Educational motivation can manifest in students completing tasks because of curiosity, awards, interest in a topic, approval from a parent or teacher, enjoyment in learning a new skill, or receiving a good grade (Ryan & Deci, 2000). Educators must attempt to foster students' motivation, which may lead them to employ extrinsic strategies, such as rewards, to promote learning (Ryan & Deci, 2009). However, the use of extrinsic motivators can negatively affect students' motivation if learning becomes contingent on rewards (Deci et al., 2001; National Research Council, 2018). Instead, teachers should focus on fostering intrinsic motivation by helping students develop academic goals and monitor learning progress while encouraging autonomy and choice (National Research Council, 2018). Gill et al. (2021) found a positive relationship between intrinsic motivational factors and the development of goals and well-being, suggesting that educators should focus on intrinsic motivation as a basis for learning. Vasconcellos et al. (2020) concluded that external motivators were negatively associated with adaptive outcomes and positively associated with maladaptive ones. Decades of research on extrinsic rewards suggest that they do not support learning or healthy motivational behaviours in the long term; thus, intrinsic motivational factors should supplant them to promote learning.

Established Learning Principles

Retrieval Practice and the Testing Effect

Research has clearly demonstrated that having students engage in retrieval practice, in which they attempt to retrieve learned information from memory, improves long-term retention (Roediger & Butler, 2011; Roediger & Karpicke, 2006; Rowland, 2014). Meta-analyses indicate that retrieval practice is more effective than re-studying for both simple and complex information (Karpicke & Grimaldi, 2012; McDaniel et al., 2011). Retrieval practice often takes the form of practice tests and quizzing, and even a single retrieval session is sufficient to stimulate stronger retention of information than not engaging in testing at all. More than a century of empirical research on what is known as the testing effect has consistently indicated that the use of practice tests, either as a classroom activity or a form of studying, promotes increased learning and retention compared to more commonly used study strategies (Roediger & Karpicke, 2006). Meta-analyses have found that retrieval practice tends to produce the strongest effects in mathematics but that it benefits learning across all content areas, and its positive effects are intensified when students are provided with feedback in response to the practice tests (Bangert-Drowns et al., 1991). Practice testing also produces substantial benefits in enhancing students' free recall and long-term retention while reducing forgetting (Roediger & Karpicke, 2006).

Dual Coding

Decades of research have firmly established that pairing images with verbal or textual information assists learning and retention. This process is explained by Dual Coding Theory, a concept Paivio pioneered in 1969 and expanded upon in 1986. The theory asserts that humans have two separate cognitive systems for processing information, one verbal and one visual, and that when the two are combined there is an additive effect that allows for greater retention than would be possible with just one of the systems (Clark & Paivio, 1991; Cuevas, 2016a; Kirschner & Hendrick, 2020).
The two systems are interconnected but functionally independent, an important feature because if the two systems did not function independently, cognitive overload would result, as it often does as a consequence of excessive stimuli. Instead, cognitive load remains low when images are used to supplement linguistic information, and memory is enhanced because there are two storage systems. Indeed, Cognitive Load Theory, later developed by Sweller (Kirschner & Hendrick, 2020; Sweller, 1988), relies heavily on Dual Coding Theory. Neurological research has provided evidence for the processes that allow images to enhance memory (Di Virgilio & Clarke, 1997; Fiebach & Friederici, 2003; Welcome et al., 2011). Additionally, a great deal of experimental research from the field of cognitive psychology has documented the benefits of dual coding for human learning and its potential for use in educational settings (Cuevas & Dawson, 2018; Hodes, 1998; Sadoski et al., 1995; Sharps & Price, 1992; Wooten & Cuevas, 2024).

Summarisation

The effectiveness of summarisation is linked to metacognition and self-regulation, two essential components of learning (Day, 1986; Leopold & Leutner, 2012). Kintsch and van Dijk (1978) and Brown and Day (1983) proposed important characteristics for summarising or condensing information: deleting irrelevant information, identifying pertinent information or specific supporting ideas, writing cohesive statements or topic sentences for each idea, and reconstructing ideas. These features constitute a frame for students to translate learned information into their own words (Westby et al., 2010). They are related to evidence-based strategies such as K-W-L charts (Carr & Ogle, 1987), drawing concepts (Leopold & Leutner, 2012), concept maps (Chang et al., 2002), and the use of headings to structure comprehension (Lorch & Pugzles Lorch, 1996). Summarisation strategies have been shown to be particularly helpful in building comprehension for students of varying age groups and ability levels (Hagaman et al., 2016; Shelton et al., 2020; Solis et al., 2012; Stevens et al., 2019), a skill that is vital for student success (Bogaerds-Hazenberg et al., 2020; Perin et al., 2017). Ultimately, decades of research indicate that summarisation assists students in processing information into a condensed structure and is an effective strategy for supporting reading comprehension and developing content knowledge.

Direct Instruction

Teacher-centred strategies such as direct instruction have tended to lose support in pedagogical circles as student-centred forms such as discovery learning have become more popular (Clark et al., 2012). Yet direct instruction has long been established as among the most effective forms of instruction (Kirschner & Hendrick, 2020). The method comprises well-supported strategies: the teacher activating background knowledge and fostering introductory focus to start lessons, modelling skills and processes for students, providing well-structured explanations that chunk information into manageable portions, and guiding independent practice after students have had sufficient support and have become familiar with the concepts. Each of these is an aspect of successful scaffolding, and each is strongly supported by research in cognitive science and educational psychology (Rosenshine, 2012).
The evidence for the effectiveness of the features comprising direct instruction is so thorough that the American Psychological Association’s top 20 principles of teaching and learning (2015) include sections dedicated to specific components of the method. Additionally, two comprehensive meta-analyses captured the extent of research support for direct instruction. One found consistently significant and positive effects on student learning across 400 studies spanning 50 years and all subject areas (Stockard et al., 2018). Another compared the effects of student-centred approaches and direct instruction based on studies across a ten-year period and concluded that the positive effects of direct instruction employing full teacher guidance were far greater than those of student-driven approaches (Furtak et al., 2012).

Spacing

Spacing, or distributed learning, occurs when an instructor or student intentionally inserts time intervals between learning sessions based on the same content. Introducing time intervals between study sessions results in stronger retention than massed practice and limits forgetting (Cepeda et al., 2009; Latimier et al., 2021). Research has shown distributed learning to be effective across many different domains, populations, age groups, and developmental levels, in each case resulting in substantial improvements in long-term retention (Carpenter et al., 2012; Larsen, 2018; Seabrook et al., 2005). Kirschner and Hendrick (2020) argue that distributed practice is among the most well-established procedures for enhancing learning. In one large, well-designed study, Cepeda et al. (2008) concluded that optimal retention occurred when learners studied information on multiple occasions with gaps between study periods and with tests administered at different time intervals. Dunlosky et al. (2013) noted that while spacing is not consistently or intentionally practised in formal learning environments, the practice has high utility value due to its consistent benefits to students across such a wide range of variables. Rohrer and Pashler (2010) contend that while there is extensive research evidence supporting the effectiveness of spacing as an instructional strategy, relatively little attention is devoted to its use in practice.

Current Study

The principal purpose of this study was to assess the pedagogical knowledge of faculty at a large, teaching-focused state university in the U.S., specifically their knowledge of practices for which there is abundant research evidence either in support or in refutation. Faculty at research universities primarily focus on research output, whereas faculty at teaching universities devote the majority of their time and effort to delivering instruction. Thus, it seemed logical to assess faculty’s knowledge of teaching at a university where teaching, and therefore pedagogy, is the greater emphasis. Additionally, because the field of education is predominantly concerned with pedagogy, education professors would ostensibly be expected to show the strongest understanding of these concepts, though faculty from all departments within the university were assessed.

A secondary purpose of the study was to gauge professors’ metacognitive awareness by ascertaining their confidence levels in their pedagogical knowledge and whether their self-assessments aligned with their actual level of knowledge.
The implication is that if faculty showed high confidence but low knowledge, and therefore low levels of metacognition, they would be unaware of their misconceptions. As a result, such faculty would be unlikely to seek out professional development opportunities or investigate approaches that might ultimately improve their instruction. If, on the other hand, faculty showed stronger metacognition, with low confidence and also low knowledge, they would likely be more willing to engage with sources to improve the delivery of their content because they would be aware of their limitations in that regard. Finally, if they showed strong metacognition with high confidence and high levels of knowledge, this would be the ideal scenario and should result in favourable learning outcomes as long as the faculty also had sufficient content knowledge and sociocultural awareness.

The study was guided by the following research questions:

1. Do faculty members show a strong understanding of well-established concepts regarding learning, as established by cognitive science?
2. Which learning concepts do faculty show the most misunderstanding of (i.e., which myths do they tend to endorse, and which established concepts do they tend to reject)?
3. Are there differences in the level of knowledge of learning practices between faculty members from different disciplines? For instance, do faculty from education score significantly higher than faculty in other areas? Are there faculty from certain content areas who show a greater tendency to believe in myths or to be unaware of established learning principles?
4. Are there differences in the level of knowledge of learning practices between faculty members according to rank, i.e., university experience?
5. Do faculty show metacognitive awareness of their level of knowledge of teaching and learning practices?

3. Methodology

Contextual Factors and Participants

Data were collected from faculty at a midsized public university comprising five campuses in the southeastern United States, with a total enrolment of approximately 18,000 students and 980 faculty at the time of data collection. The student-to-faculty ratio is 18:1, and 74% of faculty are full-time, indicating a low proportion of adjunct faculty. According to the Carnegie Classification system, the university is classified under “Master's Colleges & Universities: Larger Programs”. The vast majority of the students enrolled, and of the degrees the university confers, are at the undergraduate level, but the institution also offers master’s and doctoral degrees.

The institution contains a traditional composition of colleges for a state university: Arts and Letters, Business, Education, Health Sciences and Professions, Science and Mathematics, and University College (interdisciplinary studies). The participants comprised full-time faculty (N = 107) from each college at the university. The breakdown of responses by college was approximately proportional to the relative size of each college (n = 37, n = 7, n = 14, n = 6, n = 31, and n = 4, respectively, with eight declining to identify their college). Respondents were roughly evenly distributed according to rank: Professors (n = 26), Associate Professors (n = 29), Assistant Professors (n = 28), and Lecturers/Instructors (n = 22), with 79% being tenured or tenure-track and two declining to specify.
Of all faculty responding to the broader survey (N = 186), 54.3% identified as women, 39.8% identified as men, 2.2% identified as non-binary or non-conforming, and 3.7% chose not to answer. Data on age were not available.

Design

The study used a non-experimental, cross-sectional design that relied primarily on group comparison for the essential analyses. Data were analysed quantitatively using descriptive statistics, analysis of variance, and Pearson correlation. The study did not include an intervention, and data were collected at a single time point. Though data were collected via a survey instrument, an objective assessment of knowledge was used as the primary measure instead of the dispositional constructs that would more commonly be the focus of a survey.

Instrument

The Diverse Learning Environments (DLE) survey was distributed electronically by the Higher Education Research Institute (HERI) to students and faculty across all five campuses. The DLE is administered by the UCLA HERI to colleges and universities across the United States (HERI, 2015) and was designed to assess stakeholders’ perceptions of constructs such as institutional climate, satisfaction with the academic and work environment, institutional diversity, individual social identity, intergroup relations, and discrimination. A more detailed description of the DLE and related variables can be found in Dawson and Cuevas (2019). For this study we analysed the variables related to the faculty portion of the survey.

The faculty section of the DLE used for this study comprised 95 items, including questions about demographic variables, rank and tenure status, course information, involvement in activities such as research and scholarship, instructional strategies, technology, DEI, COVID-19, satisfaction with the workplace environment, and salary. The majority of the items used a Likert scale, with some Yes/No response items and accompanying follow-up questions. The final 35 items consisted of “local optional questions”, which the institution added to the survey beyond the original DLE items in order to address specific questions of interest. The items used for this study were included in the local optional questions and were grouped into two constructs: pedagogical knowledge and confidence in pedagogical knowledge. The design, selection, and scoring of the items are discussed below.

Pedagogical Knowledge Items

To assess pedagogical knowledge, a 10-item scale was created. The scale consisted of five common myths and misconceptions about learning and five learning strategies that have been well established by research. The five myths entailed the following concepts: learning styles, discovery learning, the efficacy of fostering extrinsic motivation, the efficacy of multitasking, and the existence of digital natives. The five well-established learning concepts were dual coding, summarisation, practice testing, direct instruction, and spacing. The rationale was that pedagogical knowledge could be assessed through two general propositions: how many misconceptions an educator held about learning and how many effective approaches they were aware of.
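To make this construction concrete, the sketch below shows how a 0-10 knowledge score of this kind can be computed; it is illustrative only, the item and column names are our own invention rather than the DLE’s, and it uses the dichotomous scoring convention detailed in the paragraphs that follow.

```python
import pandas as pd

# Hypothetical column names for the ten local items; the actual item
# wording appears in the article's Appendix A.
MYTHS = ["learning_styles", "discovery_learning", "extrinsic_motivation",
         "multitasking", "digital_natives"]
EFFECTIVE = ["dual_coding", "summarisation", "practice_testing",
             "direct_instruction", "spacing"]

def pedagogical_knowledge(row: pd.Series) -> int:
    """Return a 0-10 knowledge score from 4-point Likert responses,
    assuming 1 = Strongly Disagree ... 4 = Strongly Agree."""
    score = 0
    for item in EFFECTIVE:
        score += int(row[item] >= 3)  # endorsing an effective practice counts as correct
    for item in MYTHS:
        score += int(row[item] <= 2)  # rejecting a myth/misconception counts as correct
    return score

# Usage (hypothetical file): one row per respondent.
# responses = pd.read_csv("faculty_items.csv")
# responses["knowledge"] = responses.apply(pedagogical_knowledge, axis=1)
```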
The items were limited by two main factors: 1) due to the length of the overall survey, we were able to insert only a small number of additional items because of concerns over time constraints and the likelihood that respondents would not complete a lengthy questionnaire, and 2) there needed to be clear and well-established research evidence for or against each item, with each concept as close to “settled” science as possible. Concerning this second factor, there are still many learning concepts under scrutiny, or that may show efficacy in some circumstances but not others, and such concepts could not be considered. To ensure that the format of the items was consistent with the rest of the survey items on the DLE, they were presented on a 4-point Likert scale, from “Strongly Disagree” to “Strongly Agree”. However, the responses were scored dichotomously as either correct or incorrect. For instance, for the effective learning approaches, if respondents agreed that they were effective by answering either “agree” or “strongly agree”, they were credited with a correct answer; if they answered “disagree” or “strongly disagree”, they were credited with an incorrect answer. Conversely, for the myths and misconceptions, agreement or strong agreement that they were effective was treated as an incorrect answer, whereas disagreement or strong disagreement was scored as a correct answer. The scale can be found in Appendix A.

Confidence in Pedagogical Knowledge Scale

In psychological and educational research, metacognition or metacognitive awareness has traditionally been measured by gauging one’s confidence in one’s knowledge, skills, or expertise on a topic via a survey and then assessing the respondent through an objective test to ascertain whether the two are positively correlated (Anderson & Thiede, 2008; Dunlosky et al., 2005; Thiede et al., 2003). If the individual’s self-assessment positively correlates with their actual knowledge level, this indicates strong metacognition. For instance, if the person rated their own knowledge as high and they scored high on the objective assessment, they would have shown good metacognitive awareness. Likewise, if they rated their own knowledge as low and scored low on the assessment, this would also show good metacognitive awareness because the individual would have provided an accurate self-assessment and would be aware that they were not knowledgeable about the subject. In contrast, an individual would show poor metacognition if there was a negative correlation or no correlation between the self-assessment and the objective assessment. It could be that the person assessed themselves as having low levels of knowledge but actually scored high on the assessment, which may occur if the individual suffered from anxiety or was simply unsure of their own ability; in this example the individual underestimated their knowledge or ability. But the more common form of a lack of metacognitive awareness arises when a negative correlation between the two measures occurs because the individual overrated their own knowledge on the self-assessment but scored poorly on the objective assessment. In essence, they would have believed themselves to be highly knowledgeable while in reality their knowledge level was low. They did not recognise their own lack of knowledge due to having a limited understanding of the field.
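As a minimal illustration of this correlational logic (the numbers below are synthetic, not study data), the following sketch correlates self-rated confidence with objective test scores; a significant positive r would indicate good metacognitive awareness.

```python
import numpy as np
from scipy import stats

# Synthetic example: ten respondents' composite confidence ratings and
# their scores on an objective knowledge test (both invented for illustration).
confidence = np.array([20, 18, 25, 15, 22, 19, 24, 17, 21, 23])
knowledge = np.array([8, 7, 9, 5, 8, 6, 9, 5, 7, 8])

r, p = stats.pearsonr(confidence, knowledge)
# A significant positive r indicates accurate self-assessment (good
# metacognition); r near zero or negative indicates poor metacognition.
print(f"r = {r:.2f}, p = {p:.3f}")
```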
Overestimating one’s own knowledge in this way is a very common research finding known as the Dunning-Kruger effect, wherein a person overestimates their own competence and is unaware that they lack sufficient knowledge of a subject (Kruger & Dunning, 1999; Pennycook et al., 2017). When individuals claim to have high levels of knowledge that they do not actually possess, it is known as overclaiming (Atir et al., 2015; Pennycook & Rand, 2019). To assess metacognitive awareness regarding faculty’s knowledge of pedagogical practices and human learning, we developed a five-item confidence scale. These items asked about faculty’s familiarity with research on learning and its influence on their instruction, confidence in their knowledge of best practices, confidence in their use of best practices, and their familiarity with pedagogical practices compared to their peers. These items were presented on a 5-point Likert scale. A composite score was derived for each faculty member representing a self-assessment of their familiarity and knowledge of best practices in regard to student learning. This score was then used to conduct a Pearson correlation between confidence in pedagogical knowledge and actual pedagogical knowledge, as determined by scores on the pedagogical knowledge items, in order to gauge faculty’s metacognitive awareness of learning concepts. When tested for reliability, the Cronbach’s alpha coefficient for this scale was .79. The scale can be found in Appendix B.

4. Results

Descriptive Statistics for Pedagogical Knowledge by College and Concept

To address the first research question, pertaining to the faculty members’ overall understanding of the 10 pedagogical concepts, a mean score was tabulated for all respondents (N = 107). Individual scores could range from 0, if the respondent answered all questions incorrectly, to 10, if all questions were answered correctly. A correct answer represented either agreement that a well-established, research-based pedagogical concept was effective or disagreement that a myth or misconception was an effective learning practice; values were reverse-scored for negatively coded items. The mean score across all faculty on the pedagogical knowledge items was slightly above the midpoint of the scale (M = 6.11). This result does not indicate strong pedagogical knowledge and reveals that faculty answered fewer than 65% of the items correctly on average. Table 1 presents the descriptive statistics for the pedagogical knowledge scores organised by the participants’ affiliated college.

College               N     Mean    Std. Dev.   Std. Error
Arts and Letters      37    6.38    1.77        0.29
Business               7    6.00    1.00        0.38
Education             14    5.93    2.06        0.55
Health Sciences        6    6.33    0.82        0.33
Sciences & Math       31    6.00    1.73        0.31
University College     4    6.00    1.83        0.91
Unidentified           8    5.63    1.30        1.30
Total                107    6.11    1.67        0.16

Range of possible scores: 0-10
Table 1. Pedagogical Knowledge Scores by College

The second question addressed the respondents' knowledge of effective learning practices in terms of which learning concepts they demonstrated the most and least understanding of. Across faculty from all colleges, the myth or misconception items that instructors scored most poorly on were learning styles (33% correct) and discovery learning (33% correct), with two-thirds of respondents indicating they believed those practices to benefit learning.
Additionally, fewer than half of the faculty answered the questions on multitasking (40.6% correct) and digital natives (43.8% correct) correctly. While faculty were not effective at identifying misconceptions related to pedagogy, they were more accurate in identifying effective approaches, with nearly all faculty reporting that direct instruction (97.2% correct) and practice testing (94.4% correct) are beneficial to students. Furthermore, faculty recognised the importance of spacing (85.7% correct) and, to a lesser extent, summarisation (66.7% correct) and dual coding (65.4% correct). The full breakdown of correct responses by learning concept across all faculty and by college can be found in Table 2 below. Of note, the data in Table 1 are based on means calculated for each college even if a faculty member did not complete all the items, whereas the data in Table 2 required the faculty member to complete all subscales. Because the College of Arts and Letters and the College of Science and Math each had several faculty members who did not complete every item, the sample size in Table 2 was reduced by four.

Faculty from the College of Education, who should show the most expertise regarding pedagogy, scored higher than faculty in other colleges on only one of the myth/misconception concepts: they were more likely to recognise that unguided discovery learning is ineffective (57.1% correct). College of Education faculty scored similarly to faculty from other colleges on all the other concepts, both for the myths and misconceptions and for the effective practices. Additionally, College of Education faculty were split regarding the effectiveness of summarisation for students’ learning (50% correct). Faculty across colleges were largely able to identify effective practices, such as retrieval and direct instruction, but scored somewhat lower regarding dual coding and summarisation.

Learning Concept             All Faculty   Arts & Letters   Business   Education   Health Science   Science and Math   University College
                             (N = 103)     (N = 36)         (N = 7)    (N = 14)    (N = 6)          (N = 29)           (N = 4)
Learning Styles (M/M)        33.0          44.4             14.3       28.6        33.3             31.0               25.0
Discovery Learning (M/M)     33.0          27.8             28.6       57.1        33.3             25.8               50.0
Extrinsic Motivation (M/M)   60.2          66.7             42.9       64.3        50.0             55.2               50.0
Multitasking (M/M)           40.6          40.5             28.6       35.7        50.0             51.6               0.0
Digital Natives (M/M)        43.8          55.6             14.3       35.7        66.7             40.0               50.0
Dual Coding (EP)             65.4          73.0             71.4       64.3        50.0             58.1               50.0
Summarisation (EP)           66.7          64.9             100.0      50.0        66.7             69.0               100.0
Retrieval Practice (EP)      94.4          89.2             100.0      92.9        100.0            96.8               100.0
Direct Instruction (EP)      97.2          97.3             100.0      92.9        100.0            96.7               100.0
Spacing (EP)                 85.7          83.8             100.0      71.4        83.3             96.6               75.0

(M/M) = myth/misconception; (EP) = effective practice
Table 2. Correct Responses on Each Learning Concept Across All Faculty and by College

Pedagogical Knowledge by Academic Discipline/College

For the third research question, we sought to determine whether there were differences in pedagogical knowledge regarding the 10 concepts according to academic field. In particular, we were interested in whether education faculty showed stronger pedagogical knowledge than faculty from other fields, since pedagogy is the core content of education professors’ discipline. An examination of the descriptive statistics for the mean scores by college (see Tables 1 and 2 above) revealed that the mean scores of faculty from the various colleges fell between 5.63 and 6.38 out of a possible 10. There was no college where faculty averaged more than 65% correct answers.
Additionally, only the subset of faculty who chose not to identify their college scored lower in pedagogical knowledge than those from the College of Education; faculty from all other colleges outperformed education faculty in their level of pedagogical knowledge.

To ascertain whether there were statistical differences in pedagogical knowledge according to academic field, faculty were grouped by college (e.g., Arts and Letters, Business, Education, etc.), and ANOVAs were conducted with scores from the pedagogical knowledge scale as the dependent variable. For all ANOVAs, equal variances across groups were assumed, as the assumption of homogeneity of variance was not violated in any of the models. Results showed no statistically significant differences in pedagogical knowledge between faculty from different colleges overall, F(5, 93) = 0.253, p = .937, η² = .013. Thus, while an examination of the descriptive statistics revealed that education faculty did not outperform faculty from other areas, inferential analysis indicated that faculty from across the university scored similarly on overall pedagogical knowledge.

To extend the analyses, we sought to determine whether there were differences by academic field on either the myths and misconceptions or the effective practices. ANOVAs revealed no differences by college on either the myths/misconceptions, F(5, 93) = 1.428, p = .221, η² = .071, or the effective practices, F(5, 93) = 1.700, p = .142, η² = .084. Therefore, faculty from all colleges demonstrated similar levels of knowledge of both myths/misconceptions and effective practices. These results were surprising because faculty from the College of Education would be expected to score higher in pedagogical knowledge on both subscales, since pedagogical knowledge is central to their field; yet this was not the case. There would be no reason to expect faculty from the other colleges to perform better or worse than one another, since pedagogical knowledge generally does not fall directly within their fields of study or expertise.

Pedagogical Knowledge by Rank

Also of interest was whether faculty performed differently on pedagogical knowledge according to academic rank, which may be considered a reflection of teaching experience. Respondents were grouped by rank (i.e., instructor, lecturer, assistant professor, associate professor, professor), and ANOVAs were conducted with scores on overall pedagogical knowledge, myths/misconceptions, and effective practices as dependent variables. Only full-time faculty were included in the sample. Significant main effects of rank were revealed for overall pedagogical knowledge, F(4, 100) = 3.020, p = .021, η² = .108, and for the myths/misconceptions, F(4, 100) = 2.836, p = .028, η² = .102, but not for the effective practices, F(4, 100) = 1.455, p = .222, η² = .055.

For overall pedagogical knowledge, LSD post hoc analyses indicated that professors (p = .008), associate professors (p = .019), and lecturers (p = .044) scored significantly higher than assistant professors. Furthermore, professors scored significantly higher than lecturers (p = .038). The LSD post hoc analyses for myths/misconceptions revealed that professors correctly identified myths at a significantly higher rate than assistant professors (p = .008) and instructors (p = .015).
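For readers who wish to reproduce this style of analysis, a minimal sketch follows; it uses synthetic data and invented column names rather than the study’s data, checking homogeneity of variance with Levene’s test and then running a one-way ANOVA on knowledge scores by rank.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Synthetic stand-in for the faculty data: 105 respondents across five
# ranks, with knowledge scores on the 0-10 scale described above.
rng = np.random.default_rng(42)
ranks = ["Professor", "Associate", "Assistant", "Lecturer", "Instructor"]
df = pd.DataFrame({
    "rank": rng.choice(ranks, size=105),
    "knowledge": rng.integers(0, 11, size=105),
})

groups = [g["knowledge"].to_numpy() for _, g in df.groupby("rank")]

# Levene's test checks the homogeneity-of-variance assumption reported above.
levene_stat, levene_p = stats.levene(*groups)

# One-way ANOVA across the five rank groups.
f_stat, p_value = stats.f_oneway(*groups)
print(f"Levene p = {levene_p:.3f}; F = {f_stat:.2f}, p = {p_value:.3f}")
```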
Full and associate professors outperformed less experienced professors, potentially due to having more expertise and more years of instructional experience on average. Descriptive statistics by rank may be found in Tables 3, 4, and 5 below. Note that in Table 3 the possible score ranged from 0 to 10, whereas in Tables 4 and 5 the possible score ranged from 0 to 5. For the myths and misconceptions, the scores for faculty at all ranks fell below the 50% mark (M = 2.5) except for professors, who scored just above it (M = 2.55).

Rank               N     Mean    Std. Dev.   Std. Error
Full Professors    26    6.58    1.27        0.25
Associate Profs    29    6.41    2.13        0.40
Assistant Profs    28    5.39    1.47        0.28
Lecturers          18    6.39    1.42        0.33
Instructors         4    4.75    0.50        0.25
Total             105    6.11    1.68        0.16

Table 3. Pedagogical Knowledge Scores by Rank Overall

Rank               N     Mean    Std. Dev.   Std. Error
Full Professors    26    2.55    0.50        0.10
Associate Profs    29    2.43    0.41        0.08
Assistant Profs    28    2.25    0.38        0.07
Lecturers          18    2.44    0.39        0.09
Instructors         4    2.00    0.16        0.08
Total             105    2.40    0.43        0.04

Table 4. Pedagogical Knowledge Scores by Rank for Myths and Misconceptions

Rank               N     Mean    Std. Dev.   Std. Error
Full Professors    26    3.01    0.32        0.06
Associate Profs    29    2.89    0.22        0.04
Assistant Profs    28    2.89    0.29        0.05
Lecturers          18    3.00    0.26        0.06
Instructors         4    3.10    0.38        0.19
Total             105    2.95    0.28        0.03

Table 5. Pedagogical Knowledge Scores by Rank for Effective Practices

Metacognitive Awareness of Pedagogical Knowledge

Of particular interest was whether faculty demonstrated metacognitive awareness of their own levels of pedagogical knowledge. As noted in the method section above, metacognition, or metacognitive awareness, has typically been measured by assessing respondents’ confidence in their own knowledge or ability, then assessing them on an objective test of that knowledge or ability and conducting a correlational analysis to determine whether their self-beliefs correspond with their actual knowledge or performance levels on the constructs. For this analysis, we conducted Pearson correlations between faculty’s scores on the Confidence in Pedagogical Knowledge Scale and their scores on the pedagogical knowledge items to test for a relationship between the two.

When measuring the metacognitive awareness of pedagogy for all faculty who participated, a Pearson correlation revealed a weak, non-significant negative correlation, r(107) = -.157, p = .105. Accurate metacognitive awareness is shown only when there is a significant positive correlation between one’s confidence in one’s own knowledge or ability and one’s actual levels in those areas. A negative correlation or no correlation (non-significant) indicates poor metacognition, in that respondents’ views of their own knowledge or abilities do not correspond with their actual levels; that was the case across all faculty in the study. In regard to faculty in the College of Education, whom we hypothesised would have greater pedagogical knowledge and awareness of their own levels of expertise in the area, we again found no correlation between their self-reported level of expertise and their actual level of pedagogical knowledge, r(14) = .003, p = .992. We also tested for differences in metacognitive awareness between faculty based on academic rank; however, no significant differences were found on this variable, with more experienced faculty showing no greater awareness in this regard than newer faculty.
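Because these correlational results rest on the composite confidence score, it may help to see how the scale’s internal consistency (the Cronbach’s alpha of .79 reported in the Methodology) can be computed. The sketch below applies the standard Cronbach’s alpha formula to synthetic item responses; the column names are invented for illustration.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances /
    variance of the total score), for k items (columns)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Synthetic responses: 107 faculty x 5 confidence items on a 5-point scale.
rng = np.random.default_rng(7)
conf_items = pd.DataFrame(rng.integers(1, 6, size=(107, 5)),
                          columns=[f"conf_{i}" for i in range(1, 6)])

alpha = cronbach_alpha(conf_items)
composite = conf_items.sum(axis=1)  # the self-assessment used in the correlations
print(f"alpha = {alpha:.2f}")
```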
Overall, faculty reported high confidence that their teaching was heavily influenced by their familiarity with effective teaching practices (81.2% agreement) and did not believe that other faculty members had better knowledge of those practices (87.7% agreement). While faculty did show recognition of effective practices, they also endorsed myths and misconceptions regarding pedagogy, the area in which they scored most poorly.

5. Discussion

The goals of this study were to assess the pedagogical knowledge, as a function of both effective practices and misconceptions, of faculty at a large teaching-centred state university and to gauge their metacognitive awareness regarding their own instructional practices. In addition, we sought to discern whether these outcomes varied according to academic discipline, as represented by college, and by academic rank, which can often serve as a proxy for experience in higher education.

The data present a picture of faculty who tended to characterise all the pedagogical approaches they were presented with as effective, regardless of whether those approaches were myths or misconceptions or were actually effective strategies. This produced a dynamic in which faculty correctly classified effective practices as beneficial to learning but also incorrectly endorsed myths and misconceptions. It suggests that faculty at the university in this study are often incorrect in their assumptions regarding ineffective practices and mistakenly believe that practices debunked by research benefit students. This finding is consistent with recent research indicating that educators continue to report holding beliefs in neuromyths despite a wealth of evidence to the contrary (Macdonald et al., 2017; Newton & Salvi, 2020; Rousseau, 2021).

Faculty answered the majority of the pedagogical items correctly, with most incorrect answers coming in the form of endorsing myths and misconceptions as effective practices. For example, two-thirds of faculty believed unguided discovery learning and learning styles to be beneficial to student learning, while more than half were incorrect about multitasking and digital natives. The only misconception that the majority of faculty correctly characterised was the use of extrinsic motivators, with most classifying reward systems as ineffective long-term instructional strategies. For the effective practices, the vast majority of faculty correctly recognised the efficacy of direct instruction, retrieval practice, and spacing, with a smaller majority recognising the effectiveness of summarisation and dual coding. While faculty demonstrated an understanding that certain research-supported strategies are effective, many also believed several of the myths and misconceptions to be beneficial. This may indicate that instructors are not successfully differentiating between effective and ineffective practices, a finding consistent with prior research (Rohrer & Pashler, 2010) and one suggesting that faculty may often unknowingly choose ineffective methods. If this is the case, then many unproductive teaching methods are likely being used in college classrooms, as well as at the public-school level, where similar findings have emerged (Lethaby & Harries, 2016).

The results were somewhat less clear regarding the pedagogical knowledge of faculty according to rank. We did not have specific expectations about how faculty at different ranks would perform.
It was possible that higher-ranking faculty would show stronger pedagogical knowledge due to having more instructional experience, although, considering that some lower-ranked faculty may have previously been employed at other universities or held multiple postdoctoral positions, higher-ranked faculty would not universally have been more experienced. Another possibility was that newer faculty would be more familiar with learning science than longer-tenured colleagues because they had been exposed to the latest research more recently when completing their doctorates. The former scenario appeared more likely to be the case than the latter, with full professors and associate professors outperforming assistant professors in overall pedagogical knowledge. While faculty of all ranks performed similarly in identifying effective instructional practices, full professors were less likely to endorse myths and misconceptions than assistant professors and instructors. Thus, faculty across ranks appeared to hold similar views regarding effective practices; still, the more newly hired or less experienced faculty were more likely to endorse myths and misconceptions than tenured faculty. One possibility suggested by these results is that newly minted Ph.D. holders are not being exposed to accurate learning science in their doctoral programmes and that myths and misconceptions are proliferating at that level.

The most notable contributions of this study emerged through analyses resulting in non-significant findings. Surprisingly, College of Education faculty, whose academic discipline is entirely rooted in pedagogy, did not demonstrate a better understanding of these research-based, well-established instructional concepts than faculty from other disciplines. With the exception of unguided discovery learning, education faculty were just as likely to endorse myths and misconceptions about learning and no more likely to recognise practices supported by well-established research. In fact, while the difference was not statistically significant, education faculty scored lower in pedagogical knowledge than the faculty from each of the other colleges.

Additionally, faculty across the university showed a lack of metacognitive awareness in regard to their pedagogical knowledge. The absence of a positive correlation between faculty members’ confidence in their knowledge of teaching and learning practices and their actual knowledge of those practices revealed a limited understanding of their own knowledge in the area. In short, being confident in their knowledge of pedagogy was not related to actually having high levels of knowledge on the topic. This dynamic was true of faculty from the College of Education as well. Individuals with low levels of knowledge yet high levels of confidence in that knowledge are unlikely to change their views and seek ways to improve their knowledge or performance. In this case, such faculty would be unlikely to improve upon or learn about new instructional techniques over time.

The dichotomy between actual knowledge and confidence in one’s own knowledge was underscored by the preponderance of faculty, nearly 88%, who believed that others at the university did not know more about pedagogical approaches than they did. In one respect, this demonstrates high self-efficacy in their teaching practices, but it may also be cause for concern.
Considering that the vast majority of faculty participants were trained in, and taught in, fields in which learning science is not central to the discipline, a belief that no one else at the university has more expertise in pedagogy could lead to circumstances in which faculty are uninterested in pursuing more effective methods or learning about emerging research on teaching practices. This situation may mirror that of K-12 education, where public school teachers may receive limited instruction in learning science and ultimately default to relying on anecdotal experience to guide their practice.

This particular dynamic, high levels of confidence paired with low levels of knowledge, represents the well-established Dunning-Kruger effect (Kruger & Dunning, 1999). The Dunning-Kruger effect most commonly appears in respondents with the lowest levels of knowledge on a subject, when those with little expertise overclaim and believe their knowledge to be high in a field with which they have limited familiarity (Atir et al., 2015; Pennycook et al., 2017). One novel contribution of the present study is that the participants cannot be viewed as having the lowest levels of expertise, yet they still demonstrated what can be considered the Dunning-Kruger effect because their confidence far exceeded their actual knowledge levels. Faculty almost universally held terminal degrees in an academic field, most likely from national research universities. This suggests that the effect does not apply only to those with the lowest levels of knowledge or those whose knowledge lies outside their field. In this case, all participants had relatively high levels of experience and some knowledge of the learning sciences. However, it appears that the learning science underlying pedagogy is specialised enough that even those with extensive teaching experience may have limited knowledge of it while at the same time being confident in their familiarity with the subject.

It is important to note that there is no reason to conclude that the issues revealed here are unique or specific to the university that served as the focus of the study, as the participants were not educated at this institution. The vast majority of faculty at the university received their Ph.D.s from what are considered to be high-level research universities, categorised under the Carnegie classification system as “Doctoral Universities: Very High Research Activity”. Thus, it is likely that their cohort members who were educated at the same universities and secured positions teaching at other institutions would hold similar views. Considering this situation, these results may indicate much wider issues in the preparation of university faculty for teaching purposes.

Limitations

One limitation of the present study was that the assessment designed to measure knowledge of pedagogical practices was restricted to ten items. There were two primary reasons for this. First, as an extension of a much larger survey instrument, we were limited in the number of items we were able to introduce. It can certainly be argued that ten items are not enough to capture the full range of pedagogical practices, lacking the breadth to ascertain the full scope of possibilities, though using a mixture of effective practices and myths/misconceptions allowed for more nuanced analyses of pedagogical knowledge.
The second reason for the limited number of pedagogical items was that it was somewhat challenging to identify approaches for which there was an abundance of research evidence, to the extent that we could consider the science settled, and which were not confined to certain developmental levels or content areas. For instance, a practice such as interleaving is well supported across age ranges, but research has most commonly linked it to math instruction (Rohrer & Pashler, 2010), and it is less clear how it may apply to language arts or history instruction. Likewise, practices like the use of manipulatives have been shown to be effective, but mostly for young children and in somewhat narrow contexts (Willingham, 2017). For these reasons, the measure of pedagogical knowledge was limited to a short list of practices.

Another potential limitation concerns the question of how settled is “settled”. Unlike in the hard sciences, research in the social sciences rarely approaches that level of consensus, and there may be continued debate and contradictory findings for decades. Therefore, we chose concepts for which we determined there was the greatest degree of consensus among researchers and the greatest abundance of robust empirical evidence. We drew upon concepts such as those compiled by organisations like the American Psychological Association (APA, 2015) and a compilation of seminal studies in educational psychology (Kirschner & Hendrick, 2020), each of which is supported by hundreds of studies. Therefore, although some academics may dispute our choices of concepts, we are confident in the validity and reliability of the ones we included, though the criteria limited our options.

Additionally, the reliability of the Confidence in Pedagogical Knowledge Scale was not as strong as we would have liked (α = .79). Reliability was further tested by removing each of the five items in turn to ascertain whether the scale would prove more reliable in any combination as a 4-item scale, but the highest reliability was achieved by including all five items. Future researchers could improve upon this scale by introducing stronger items or replacing ones from the current scale. Nonetheless, while ideally the reliability of the present scale would have been stronger, it was acceptable.

A final limitation was that the data were collected from just one large state university and the sample was restricted to 107 faculty members. But as noted above, these faculty members were trained at a wide range of different universities, the majority of them national research universities, so we do not view the results as limited to a reflection of the one institution that was the focus of the study. However, we recommend that future researchers extend their samples to several institutions at a variety of levels, such as community colleges, comprehensive universities, and research universities, as well as to teachers in K-12 settings.

Implications

In sum, the data suggest a situation in which faculty do not have a strong understanding of which pedagogical approaches are effective, according to learning science, in contrast to those which are not, yet they feel relatively confident that they do. Faculty were able to correctly identify effective practices but could not distinguish those from myths and misconceptions, and the widespread endorsement of certain misconceptions, such as learning styles and discovery learning, suggests that faculty may not be employing the most efficacious teaching methods.
The limited levels of metacognitive awareness further suggest that such faculty members may be unlikely to seek out better methods if they are unaware that a variety of concepts they endorse do not benefit learning. It is understandable that faculty members from the colleges of Arts and Letters, Business, and Science and Mathematics would not be as aware of the cognitive science supporting different learning strategies, as their doctoral preparation was focused on promoting the highest levels of knowledge and research connected to their specific disciplines. However, the issue is more complex for those in the College of Education.

The remedies for faculty from colleges besides Education are straightforward. University programmes that prepare Ph.D. candidates to be college faculty can ensure that they include courses structured to familiarise future faculty with recent research on human learning and how cognitive science informs teaching practices in real-world applications. Additionally, most universities have centres for teaching and learning designed to provide professional development for faculty, particularly in the area of pedagogy. These centres should endeavour to provide the most recent and accurate information and avoid endorsing concepts shown by learning science to be misconceptions. Both of these options are already instituted at many universities; our results suggest that they must be done better.

The issues regarding remedies for colleges of education are more daunting. Pedagogy is the central content of these programmes. Colleges of education prepare K-12 teachers, K-12 administrators, district and state-level administrators, and very often university administrators who receive their doctorates in leadership programmes housed in colleges of education. If education professors are not aware of the most well-established teaching and learning methods, then their students, including K-12 teachers and administrators at each level, will not become aware of them either. And because nearly all of the education faculty at the university in this study received their Ph.D.s from a range of R-1 institutions, the issue may be widespread. State and national accrediting bodies may not sufficiently account for professors’ level of pedagogical knowledge, which may partially explain why misinformation about learning, such as learning styles, is included in teacher licensure materials in the majority of states (Furey, 2020). Questions have arisen about the efficacy of the training provided by colleges of education (Asher, 2018; Goldhaber, 2019), and our results appear to underscore those concerns.

These results, if replicated in further studies, should compel those in academia to rethink and perhaps overhaul the foundations of colleges of education, in addition to making substantial alterations to accrediting bodies for colleges of education and state boards of education in the U.S. These organisations may not be meeting their primary responsibility of ensuring that their graduates adequately understand teaching and learning practices. It should not be sufficient to simply train teachers to function within a system. They should be taught what works to enhance student learning in order to become better teachers, but that is not possible if those teaching them at the university level do not have a full understanding of fundamental principles of learning.
A first step in the process would be for universities and colleges to acknowledge this potential limitation and modify curricula to ensure that adequate attention is devoted to coursework on how human learning occurs. This would necessitate employing personnel with the expertise to teach such courses, which may entail universities hiring instructors with specific skill sets that are not currently prioritised in order to advance that particular knowledge base. For existing faculty who may not have had the benefit of coursework in the learning sciences themselves, professional development can be offered. This may consist of learning modules in which faculty read and discuss research pertaining to learning outcomes and instructional strategies from fields such as cognitive psychology, neuroscience, educational psychology, and cognitive science. Ideally, these modules would incorporate the very methods being studied and provide videos and demonstrations of the strategies being used in classroom settings. This is a general overview of initial steps that may be taken, but a key point is that this training should ultimately be introduced to faculty, to those in graduate courses who are likely one day to be faculty or involved in higher education, and to those who are preparing for roles in K-12 education.

6. Conclusion

This study, along with a growing body of research, suggests that instructors are not currently being adequately trained in the cognitive science that informs teaching and learning. If myths and misconceptions about learning persist, those instructors will be unlikely to optimise student learning. By acquiring a more comprehensive understanding of learning science, university instructors will have the opportunity to employ teaching practices that have been shown to enhance cognition, such as methods to increase retention of information, problem-solving skills, or procedural knowledge. There is little doubt that their students would benefit from the use of research-based practices. This is especially true of faculty in colleges of education who prepare K-12 teachers and administrators at a variety of levels, because an understanding of such approaches could then be transferred to those who would put them to use in classrooms with younger learners. We recommend that universities prioritise an emphasis on learning science to ensure that the candidates they train for teaching positions are aware of effective teaching and learning practices and can distinguish between those and ineffective ones, which should ultimately enhance educational outcomes for all involved.

About the Authors

Joshua A. Cuevas
ORCID ID: 0000-0003-3237-6670
University of North Georgia
[email protected]

Bryan L. Dawson
University of North Georgia
[email protected]

Gina Childers
Texas Tech University
[email protected]

References

Alloway, T. P., Gathercole, S. E., & Pickering, S. J. (2006). Verbal and visuospatial short-term and working memory in children: Are they separable? Child Development, 77(6), pp. 1698-1716. https://doi.org/10.1111/j.1467-8624.2006.00968.x.

American Psychological Association, Coalition for Psychology in Schools and Education. (2015). Top 20 principles from psychology for preK-12 teaching and learning. Retrieved from www.apa.org/ed/schools/cpse/top-twenty-principles.pdf.

Anderson, M. C., & Thiede, K. W. (2008). Why do delayed summaries improve metacomprehension accuracy? Acta Psychologica, 128, pp. 110-118. https://doi.org/10.1016/j.actpsy.2007.10.006.

Asher, L. (2018, April 8). How ed schools became a menace: They trained an army of bureaucrats who are pushing the academy toward ideological fundamentalism. The Chronicle of Higher Education. https://www.chronicle.com/article/How-Ed-Schools-Became-a-Menace/243062.
Atir, S., Rosenzweig, E., & Dunning, D. (2015). When knowledge knows no bounds: Self-perceived expertise predicts claims of impossible knowledge. Psychological Science, 26(8), pp. 1295-1303. https://doi.org/10.1177/0956797615588195.

Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C. L. C. (1991). Effects of frequent classroom testing. Journal of Educational Research, 85, pp. 89-99. https://doi.org/10.1080/00220671.1991.10702818.

Bennett, S., Maton, K., & Kervin, L. (2008). The ‘digital natives’ debate: A critical review of the evidence. British Journal of Educational Technology, 39(5), pp. 775-786. https://doi.org/10.1111/j.1467-8535.2007.00793.x.

Bogaerds-Hazenberg, S., Evers-Vermeul, J., & van den Bergh, H. (2020). A meta-analysis on the effects of text structure instruction on reading comprehension in the upper elementary grades. Reading Research Quarterly, 56(3), pp. 435-462. https://doi.org/10.1002/rrq.311.

Brown, A. L., & Day, J. D. (1983). Macrorules for summarizing texts: The development of expertise. Journal of Verbal Learning and Verbal Behavior, 22, pp. 1-14. https://doi.org/10.1016/S0022-5371(83)80002-4.

Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H. K., & Pashler, H. (2012). Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction. Educational Psychology Review, 24(3), pp. 369-378. https://doi.org/10.1007/s10648-012-9205-z.

Carr, E., & Ogle, D. (1987). K-W-L Plus: A strategy for comprehension and summarization. Journal of Reading, 30(7), pp. 626-631. https://eric.ed.gov/?id=EJ350560.

Cepeda, N. J., Coburn, N., Rohrer, D., Wixted, J. T., Mozer, M. C., & Pashler, H. (2009). Optimizing distributed practice: Theoretical analysis and practical implications. Experimental Psychology, 56(4), pp. 236-246. https://doi.org/10.1027/1618-3169.56.4.236.

Chang, K., Sung, Y., & Chen, I. (2002). The effect of concept mapping to enhance text comprehension and summarization. The Journal of Experimental Education, 71(1), pp. 5-23. https://doi.org/10.1080/00220970209602054.

Clark, R. E., Kirschner, P. A., & Sweller, J. (2012). Putting students on a path to learning: The case for fully guided instruction. American Educator, 36(1), pp. 6-11. https://eric.ed.gov/?id=EJ971752.

Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational Psychology Review, 3(3), pp. 149-210. https://doi.org/10.1007/BF01320076.

Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning & Skills Research Centre, London.

Cuevas, J. A. (2015). Is learning styles-based instruction effective? A comprehensive analysis of recent research on learning styles. Theory and Research in Education, 13(3), pp. 308-333. https://doi.org/10.1177/1477878515606621.

Cuevas, J. A. (2016a). An analysis of current evidence supporting two alternate learning models: Learning styles and dual coding. Journal of Educational Sciences & Psychology, 6(1), pp. 1-13. https://www.researchgate.net/publication/301692526.

Cuevas, J. A. (2016b). Cognitive psychology’s case for teaching higher order thinking. Professional Educator, 15(4), pp. 4-7. https://www.academia.edu/28947876.
Cuevas, J. A. (2017). Visual and auditory learning: Differentiating instruction via sensory modality and its effects on memory. In Student Achievement: Perspectives, Assessment and Improvement Strategies (pp. 29-54). Nova Science Publishers. ISBN-13: 978-1536102055.

Cuevas, J. A. (2019). Addressing the crisis in education: External threats, embracing cognitive science, and the need for a more engaged citizenry. In Nata, R. V. (Ed.), Progress in Education (Vol. 55, pp. 1-38). Nova Science Publishers. ISBN: 978-1-53614-551-9.

Cuevas, J. A., Childers, G., & Dawson, B. L. (2023). A rationale for promoting cognitive science in teacher education: Deconstructing prevailing learning myths and advancing research-based practices. Trends in Neuroscience and Education. https://doi.org/10.1016/j.tine.2023.100209.

Cuevas, J. A., & Dawson, B. L. (2018). A test of two alternative cognitive processing models: Learning styles and dual coding. Theory and Research in Education, 16(1), pp. 40-64. https://doi.org/10.1177/1477878517731450.

Dawson, B. L., & Cuevas, J. A. (2019). An assessment of intergroup dynamics at a multi-campus university: One university, two cultures. Studies in Higher Education, 45(6), pp. 1047-1063. https://doi.org/10.1080/03075079.2019.1628198.

Day, J. (1986). Teaching summarization skills: Influences of student ability level and strategy difficulty. Cognition and Instruction, 3(3), pp. 193-210. https://doi.org/10.1207/s1532690xci0303_3.

Deci, E., Vallerand, R., Pelletier, L., & Ryan, R. (1991). Motivation and education: The self-determination perspective. Educational Psychologist, 26(3&4), pp. 325-346. https://doi.org/10.1080/00461520.1991.9653137.

Deci, E., Koestner, R., & Ryan, R. (2001). Extrinsic rewards and intrinsic motivation in education: Reconsidered once again. Review of Educational Research, 71(1), pp. 1-27. https://doi.org/10.3102/00346543071001001.

Demirbilek, M., & Talan, T. (2017). The effect of social media multitasking on classroom performance. Active Learning in Higher Education, 19(2), pp. 117-129. https://doi.org/10.1177/1469787417721382.

Di Virgilio, G., & Clarke, S. (1997). Direct interhemispheric visual input to human speech areas. Human Brain Mapping, 5, pp. 347-354. https://doi.org/10.1002/(SICI)1097-0193(1997)5:5<347::AID-HBM3>3.0.CO;2-3.

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), pp. 4-58. https://doi.org/10.1177/1529100612453266.

Dunlosky, J., Rawson, K. A., & Middleton, E. L. (2005). What constrains the accuracy of metacomprehension judgments? Testing the transfer-appropriate-monitoring and accessibility hypotheses. Journal of Memory and Language, 52, pp. 551-565. https://doi.org/10.1016/j.jml.2005.01.011.

Fiebach, C. J., & Friederici, A. D. (2003). Processing concrete words: fMRI evidence against a specific right-hemisphere involvement. Neuropsychologia, 42(1), pp. 62-70. https://doi.org/10.1016/S0028-3932(03)00145-3.

Furey, W. (2020). The stubborn myth of “learning styles”: State teacher-license prep materials peddle a debunked theory. Education Next, 20(3), pp. 8-12. https://www.educationnext.org/.

Furtak, E. M., Seidel, T., Iverson, H., & Briggs, D. C. (2012). Experimental and quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Review of Educational Research, 82, pp. 300-329. https://doi.org/10.3102/0034654312457206.
Gill, A., Trask-Kerr, K., & Vella-Brodrick, D. (2021). Systematic review of adolescent conceptions of success: Implications for wellbeing and positive education. Educational Psychology Review, 33, pp. 1553-1582. https://doi.org/10.1007/s10648-021-09605-w.

Goldhaber, D. (2019). Evidence-based teacher preparation: Policy context and what we know. Journal of Teacher Education, 70(2), pp. 90-101. https://doi.org/10.1177/0022487118800712.

Hagaman, J. L., Casey, K. J., & Reid, R. (2016). Paraphrasing strategy instruction for struggling readers. Preventing School Failure: Alternative Education for Children and Youth, 60, pp. 43-52. http://dx.doi.org/10.1080/1045988X.2014.966802.

Higher Education Research Institute. (2015, October). HERI research brief. https://www.heri.ucla.edu/briefs/DLE/DLE-2015-Brief.pdf.

Hite, R., Jones, M. G., Childers, G., Chesnutt, K., Corin, E., & Pereyra, M. (2017). Pre-service and in-service science teachers’ technological acceptance of 3D, haptic-enabled virtual reality instructional technology. Electronic Journal of Science Education, 23(1), pp. 1-34. https://eric.ed.gov/?id=EJ1203195.

Hodes, C. L. (1998). Understanding visual literacy through visual informational processing. Journal of Visual Literacy, 18(2), pp. 131-136. https://doi.org/10.1080/23796529.1998.11674534.

Jones, C., Ramanau, R., Cross, S., & Healing, G. (2010). Net generation or digital natives: Is there a distinct new generation entering university? Computers & Education, 54(3), pp. 722-732. https://doi.org/10.1016/j.compedu.2009.09.022.

Junco, R., & Cotten, S. (2011). No A 4 U: The relationship between multitasking and academic performance. Computers & Education, 59(2), pp. 505-514. https://doi.org/10.1016/j.compedu.2011.12.023.

Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99, pp. 122-149. https://doi.org/10.1037/0033-295X.99.1.122.

Karpicke, J. D., & Grimaldi, P. J. (2012). Retrieval-based learning: A perspective for enhancing meaningful learning. Educational Psychology Review, 24(3), pp. 401-418. https://doi.org/10.1007/s10648-012-9202-2.

Kintsch, W., & van Dijk, T. (1978). Toward a model of text comprehension and production. Psychological Review, 85, pp. 363-394. https://doi.org/10.1037/0033-295X.85.5.363.

Kirschner, P. A., & De Bruyckere, P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67, pp. 135-142. https://doi.org/10.1016/j.tate.2017.06.001.

Kirschner, P. A., & Hendrick, C. (2020). How learning happens: Seminal works in educational psychology and what they mean in practice. New York, NY: Routledge.

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), pp. 75-86. https://doi.org/10.1207/s15326985ep4102_1.

Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), pp. 169-183. https://doi.org/10.1080/00461520.2013.804395.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), pp. 1121-1134. https://doi.org/10.1037/0022-3514.77.6.1121.
Planning education for long-term retention: The cognitive science and implementation of retrieval practice. Seminars in Neurology, 38(4), pp. 449–456. https://doi.org/10.1055/s-0038-1666983. Latimier, A., Peyre, H., Ramus, F. (2021). A meta-analytic review of the benefit of spacing out retrieval practice episodes on retention. Educational Psychology Review, 33, pp. 959-987. https://doi.org/10.1007/s10648-020-09572-8. Lee, F. J., & Taatgen, L. (2002). ‘Multitasking as Skill Acquisition’, CogSci’02: Proceedings of the Cognitive Science Society, August 2002. Leopold, C., & Leutner, D. (2012). Science text comprehension: Drawing, main idea selection, and summarizing as learning strategies. Learning and Instruction, 22, pp. 16–26. https://doi.org/10.1016/j.learninstruc.2011.05.005. Lethaby, C., & Harries, P. (2016). Learning styles and teacher training: Are we perpetuating neuromyths? ELT Journal, 70(1), pp. 16–27. https://doi.org/10.1093/elt/ccv051. Lorch, R., & Pugzles Lorch, E. (1996). Effects of headings on text recall and summarization. Contemporary Educational Psychology, 21(3), pp. 261-278. https://doi.org/10.1006/ceps.1996.0022. Macdonald, K., Germine, L., Anderson, A., Christodoulou, J., & McGrath, L. M. (2017). Dispelling the myth: Training in education or neuroscience decreases but does not eliminate beliefs in neuromyths. Frontiers in Psychology. 8:1314. https://doi.org/10.3389/fpsyg.2017.01314. Margaryan, A., Littlejohn, A., & Vojt, G. (2011). Are digital natives a myth or reality? University students’ use of digital technologies. Computers & Education, 56(2), pp. 429-440. https://doi.org/10.1016/j.compedu.2010.09.004. Mayer, R. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59(1), pp. 14-19. https://doi.org/10.1037/0003-066X.59.1.14. McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103(2), pp. 399–414. https://doi.org/10.1037/a0021782. Mercimek B., Akbulut, Y., Dönmez, O., & Sak, U. (2020). Multitasking impairs learning from multimedia across gifted and non-gifted students. Educational Technology, Research and Development, 68(3), pp. 995-1016. https://doi.org/10.1007/s11423-019-09717-9. Nancekivell, S. E., Sun, X., Gelman, S. A., & Shah, P. (2021). A slippery myth: How learning style beliefs shape reasoning about multimodal instruction and related scientific evidence. Cognitive Science. https://doi.org/10.1111/cogs.13047. National Research Council. (2018). How people learn II: Learners, contexts, and cultures. Washington, DC: National Academy Press. Newton, P.M., & Salvi, A. (2020). How common is belief in the learning styles neuromyth, and does it matter? A pragmatic systematic review. Frontiers in Education. 5:602451.  https://doi.org/10.3389/feduc.2020.602451. Pashler, H., McDaniel, M., Rohrer, D., Bjork, R. (2009). Learning styles: Concepts and evidence. Psychological Science in the Public Interest 9: pp. 105–119. https://doi.org/10.1111/j.1539-6053.2009.01038.x. Paivio, A. (1969). Mental imagery in associative learning and memory. Psychological Review, 76, pp. 241-263. Paivio, A. (1986). Mental representations. New York, NY: Oxford University Press. Pennycook, G., & Rand, D. G. (2019). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. 
Pennycook, G., Ross, R. M., Koehler, D. J., & Fugelsang, J. A. (2017). Dunning-Kruger effects in reasoning: Theoretical implications of the failure to recognize incompetence. Psychonomic Bulletin and Review, 24(6), pp. 1774-1784. https://doi.org/10.3758/s13423-017-1242-7.
Perin, D., Lauterbach, M., Raufman, J., & Kalamkarian, H. S. (2017). Text-based writing of low-skilled postsecondary students: Relation to comprehension, self-efficacy and teacher judgments. Reading and Writing, 30(4), pp. 887-915. https://doi.org/10.1007/s11145-016-9706-0.
Prensky, M. (2001). Digital natives, digital immigrants part 1. On the Horizon, 9(5), pp. 1-6. http://dx.doi.org/10.1108/10748120110424816.
Prensky, M. (2012). Digital natives to digital wisdom: Hopeful essays for 21st century learning. Thousand Oaks, CA: Corwin.
Riener, C., & Willingham, D. (2010). The myth of learning styles. Change, 42(5), pp. 32-35.
Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), pp. 20-27. https://doi.org/10.1016/j.tics.2010.09.003.
Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), pp. 249-255. https://doi.org/10.1111/j.1467-9280.2006.01693.x.
Rogers, J., & Cheung, A. (2020). Pre-service teacher education may perpetuate myths about teaching and learning. Journal of Education for Teaching, 46(3), pp. 417-420. https://doi.org/10.1080/02607476.2020.1766835.
Rogowsky, B. A., Calhoun, B. M., & Tallal, P. (2015). Matching learning style to instructional method: Effects on comprehension. Journal of Educational Psychology, 107(1), pp. 64-78. https://doi.org/10.1037/a0037478.
Rogowsky, B. A., Calhoun, B. M., & Tallal, P. (2020). Providing instruction based on students’ learning style preferences does not improve learning. Frontiers in Psychology, 11:164. https://doi.org/10.3389/fpsyg.2020.00164.
Rohrer, D., & Pashler, H. (2010). Recent research on human learning challenges conventional instructional strategies. Educational Researcher, 39(5), pp. 406-412. https://doi.org/10.3102/0013189X10374770.
Rohrer, D., & Pashler, H. (2012). Learning styles: Where’s the evidence? Medical Education, 46(7), pp. 634-635. https://eric.ed.gov/?id=ED535732.
Rosenshine, B. V. (2012). Principles of instruction: Research-based strategies that all teachers should know. American Educator, 36(1), pp. 12-19. https://eric.ed.gov/?id=EJ971753.
Rousseau, L. (2021). Interventions to dispel neuromyths in educational settings—A review. Frontiers in Psychology, 12:719692. https://doi.org/10.3389/fpsyg.2021.719692.
Rowland, C. A. (2014). The effect of testing versus restudy on retention: A meta-analytic review of the testing effect. Psychological Bulletin, 140(6), pp. 1432-1463. https://doi.org/10.1037/a0037559.
Ryan, R., & Deci, E. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25, pp. 54-67. https://doi.org/10.1006/ceps.1999.1020.
Ryan, R. M., & Deci, E. L. (2009). Promoting self-determined school engagement: Motivation, learning, and well-being. In K. R. Wenzel & A. Wigfield (Eds.), Handbook of motivation at school (pp. 171-195). Routledge/Taylor & Francis Group.
Sadoski, M., Goetz, E. T., & Avila, E. (1995). Concreteness effects in text recall: Dual coding or context availability? Reading Research Quarterly, 30(2), pp. 278-288. https://doi.org/10.2307/748038.
Sana, F., Weston, T., & Cepeda, N. (2012). Laptop multitasking hinders classroom learning for both users and nearby peers. Computers & Education, 62, pp. 24-31. https://doi.org/10.1016/j.compedu.2012.10.003.
Scott, C. (2010). The enduring appeal of ‘learning styles’. Australian Journal of Education, 54(1), pp. 5-17. https://doi.org/10.1177/000494411005400102.
Seabrook, R., Brown, G. D. A., & Solity, J. E. (2005). Distributed and massed practice: From laboratory to classroom. Applied Cognitive Psychology, 19(1), pp. 107-122. https://doi.org/10.1002/acp.1066.
Sharps, M. J., & Price, J. L. (1992). Auditory imagery and free recall. The Journal of General Psychology, 119(1), pp. 81-87. https://doi.org/10.1080/00221309.1992.9921160.
Shelton, A., Lemons, C., & Wexler, J. (2020). Supporting main idea identification and text summarization in middle school co-taught classes. Intervention in School and Clinic, 56(4), pp. 217-223. https://doi.org/10.1177/1053451220944380.
Smith, J., Skrbis, Z., & Western, M. (2012). Beneath the ‘Digital Native’ myth: Understanding young Australians’ online time use. Journal of Sociology, 49(1), pp. 97-118. https://doi.org/10.1177/1440783311434856.
Solis, M., Ciullo, S., Vaughn, S., Pyle, N., Hassaram, B., & Leroux, A. (2012). Reading comprehension interventions for middle school students with learning disabilities: A synthesis of 30 years of research. Journal of Learning Disabilities, 45(4), pp. 327-340. https://doi.org/10.1177/0022219411402691.
Stevens, E. A., Park, S., & Vaughn, S. (2019). A review of summarizing and main idea interventions for struggling readers in grades 3 through 12: 1978-2016. Remedial and Special Education, 40, pp. 131-149. https://doi.org/10.1177/0741932517749940.
Stockard, J., Wood, T. W., Coughlin, C., & Khoury, C. R. (2018). The effectiveness of direct instruction curricula: A meta-analysis of a half century of research. Review of Educational Research, 88(4), pp. 479-507. https://journals.sagepub.com/doi/10.3102/0034654317751919.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), pp. 257-285. https://doi.org/10.1207/s15516709cog1202_4.
Sweller, J. (2011). Human cognitive architecture: Why some instructional procedures work and others do not. In K. Harris, S. Graham, & T. Urdan (Eds.), APA Educational Psychology Handbook (Vol. 1). Washington, DC: American Psychological Association.
Sweller, J. (2016). Working memory, long-term memory, and instructional design. Journal of Applied Research in Memory and Cognition, 5, pp. 360-367. https://doi.org/10.1016/j.jarmac.2015.12.002.
Sweller, J., van Merriënboer, J. J. G., & Paas, F. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, pp. 251-296. https://doi.org/10.1023/A:1022193728205.
Thiede, K. W., Anderson, M., & Therriault, D. (2003). Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology, 95, pp. 66-73. https://doi.org/10.1037/0022-0663.95.1.66.
Torrijos-Muelas, M., González-Víllora, S., & Bodoque-Osma, A. R. (2021). The persistence of neuromyths in the educational settings: A systematic review. Frontiers in Psychology, 11:591923. https://doi.org/10.3389/fpsyg.2020.591923.
Vasconcellos, D., Parker, P. D., Hilland, T., Cinelli, R., Owen, K. B., Kapsal, N., Lee, J., Antczak, D., Ntoumanis, N., Ryan, R. M., & Lonsdale, C. (2020). Self-determination theory applied to physical education: A systematic review and meta-analysis. Journal of Educational Psychology, 112(7), pp. 1444-1469. https://doi.org/10.1037/edu0000420.
Wang, S., Hsu, H., Campbell, T., Coster, D., & Longhurst, M. (2014). An investigation of middle school science teachers and students use of technology inside and outside of classrooms: Considering whether digital natives are more technology savvy than their teachers. Educational Technology Research and Development, 62(6), pp. 637-662. https://doi.org/10.1007/s11423-014-9355-4.
Wang, Z., & Tchernev, J. (2012). The “myth” of media multitasking: Reciprocal dynamics of media multitasking, personal needs, and gratifications. Journal of Communication, 62, pp. 493-513. https://doi.org/10.1111/j.1460-2466.2012.01641.x.
Welcome, S. E., Paivio, A., McRae, K., & Joanisse, M. F. (2011). An electrophysiological study of task demands on concreteness effects: Evidence for dual coding theory. Experimental Brain Research, 212(3), pp. 347-358. https://doi.org/10.1007/s00221-011-2734-8.
Westby, C., Culatta, B., Lawrence, B., & Hall-Kenyon, K. (2010). Summarizing expository texts. Topics in Language Disorders, 30(4), pp. 275-287. https://eric.ed.gov/?id=EJ906737.
Willingham, D. (2017). Ask the cognitive scientist: Do manipulatives help students learn? American Educator, 41(3), pp. 25-40. https://www.aft.org/ae/fall2017/willingham.
Willingham, D. (2019). Ask the cognitive scientist: Should teachers know the basic science of how children learn? American Educator, 43(2), pp. 30-36. https://www.aft.org/ae/summer2019/willingham.
Wood, E., & Zivcakova, L. (2015). Multitasking in educational settings. In L. D. Rosen, N. A. Cheever, & M. Carrier (Eds.), The Wiley handbook of psychology, technology and society (pp. 404-419). Hoboken, NJ: John Wiley & Sons, Inc.
Wooten, J. O., & Cuevas, J. A. (2024). The effects of dual coding theory on social studies vocabulary and comprehension in a 5th grade classroom. International Journal on Social and Education Sciences (IJonSES), 6(4), pp. 673-691. https://doi.org/10.46328/ijonses.696.
Zhang, W. (2015). Learning variables, in-class laptop multitasking and academic performance: A path analysis. Computers & Education, 81, pp. 82-88. https://doi.org/10.1016/j.compedu.2014.09.012.

Appendix A
10 Pedagogy Knowledge Items
Myths:
• Learning styles: When classroom instruction is designed to appeal to students’ individual learning styles, students are more likely to learn more. (negatively coded/false)
• Discovery learning: Students learn best if they discover information for themselves through activities with minimal guidance when they can randomly experiment on their own. (negatively coded/false)
• Extrinsic motivation: Students’ long-term learning outcomes are likely to be better if teachers and professors stimulate extrinsic motivation through things such as rewards. (negatively coded/false)
• Multitasking: Incorporating instruction that involves students in multitasking activities, such as when they think about multiple concepts at once, leads to better learning outcomes. (negatively coded/false)
• Digital natives: Students learn better when digital tools are incorporated within instruction practice because students today are naturally more adept at technology due to having used it from such a young age. (negatively coded/false)
Effective Approaches:
• Dual coding/Imagery for text: It is generally true that students’ memory of lesson content tends to be stronger if visuals and images are used to supplement class lectures, discussions, and readings. (true)
• Summarisation: When students summarise content by reducing information into concise, essential ideas, it helps build students’ knowledge and strengthen skills. (true)
• Practice testing: Quizzes and practice tests using open-ended questions tend to boost learning even if students do not do well on them. (true)
• Direct instruction: Direct instruction tends to be an ineffective method for teaching content to students. (negatively coded/false)
• Spacing: Students tend to learn more when instruction is delivered via focused, intensive sessions delivered over a brief time period rather than when information is spread out and revisited over longer time spans. (negatively coded/false)

Appendix B
5 Metacognition Items (Cronbach’s alpha = .787)
• My teaching is heavily influenced by a strong familiarity with the research on how students learn.
• I am confident that I am knowledgeable of the best practices to enhance student learning.
• I am confident that I utilise the best practices to enhance student learning.
• I feel less familiar with teaching strategies and best practices compared to my colleagues. (negatively coded)
• I am not always confident that my knowledge of pedagogy and how students learn is as strong as it could be. (negatively coded)

Declarations and Compliance with Ethical Standards
Ethics Approval: All procedures performed were in accordance with established ethical standards and were approved by the University of North Georgia Institutional Review Board.
Informed Consent: Informed consent was obtained from all participants included in the study.
Competing Interests: This research was not grant-related. The authors declare that they have no conflict of interest.
Funding: This research was not funded.
Data Availability: The data associated with this study are owned by the University of North Georgia. Interested parties may contact the first or second author regarding availability and access, and requests will be considered on an individual basis according to institutional guidelines. Anonymised data output sheets are available by contacting the first or second author of the study.
While there is no debate over the importance of content knowledge in teaching and learning, there is some question regarding whether sufficient attention is devoted to the psychological science on learning that underpins pedagogical knowledge (Cuevas, 2019; Willingham, 2019). Certainly, the curricula for those who are preparing to be university instructors in any field, e.g., physics, history, etc., focus predominantly on content knowledge, as PhD programmes are expected to produce professors with the highest level of content knowledge possible. In contrast, programmes that prepare K-12 teachers attempt a greater balance, with more attention paid to how to teach material given the developmental level and background of the learner. However, research over previous decades has shown that many educators maintain beliefs that can be classified as cognitive myths (Kirschner & van Merriënboer, 2013; Rogers & Cheung, 2020; Rousseau, 2021). One large study (N = 3877) found that neuromyths, or misconceptions about education and learning, were prevalent among educators, though the effect was moderated by higher levels of education and exposure to peer-reviewed science (Macdonald et al., 2017).

2. Literature Review
A recent systematic review concluded that nearly 90% of educators maintained a belief in neuromyths about learning styles (Newton & Salvi, 2020), and other research has found that neuromyths in general may be particularly resistant to revision among those in teacher education programmes (Rogers & Cheung, 2020). Research also suggests the belief in neuromyths among educators to be so prevalent that interventions are now being proposed to correct those misconceptions (Rousseau, 2021). If misconceptions about how humans learn are as widespread as research seems to indicate, then the issue would be twofold: instructors would likely be incorporating ineffective methods into their lessons due to the mistaken assumption that such methods will benefit student learning, while they also bypass the use of strategies shown to be effective. Thus, in practice, ineffective methods would displace the instructional strategies known to enhance learning (Rohrer & Pashler, 2010). Therefore, we reasoned that an understanding of pedagogy based on learning science would be reflected by two characteristics: 1) whether faculty were familiar with well-established learning concepts, and 2) whether they held misconceptions regarding how students learn. In this study we documented the level of pedagogical knowledge of faculty at a large state university in the United States with a teaching emphasis by assessing their understanding of basic learning concepts and misconceptions about learning. A second yet essential component of this research was to measure faculty’s level of metacognitive awareness of their pedagogical knowledge, as an understanding of their own knowledge of how learning occurs is likely to influence their willingness to search out more effective practices (Pennycook et al., 2017).

Learning Concepts
We identified ten learning concepts, five of which were well established by research as beneficial to learning and five of which research has debunked and would be classified as myths or misconceptions. In considering the different learning concepts, we selected concepts for which there would be little debate among learning scientists regarding their efficacy.
Many options were not included because the extent of their effectiveness may depend on nuances regarding their delivery, and as such they may be effective in some circumstances but not in others. For instance, the use of manipulatives shows much promise for students in certain content areas and of certain ages but is not necessarily universally beneficial in all circumstances (Willingham, 2017). The ten we chose were applicable to all content areas and age groups. The following sections constitute succinct summaries of what we deem to be the current research consensus on each of the learning concepts included in the study. More in-depth explanations of each concept can be found in Cuevas et al. (2023).

Myths and Misconceptions
Multitasking
Multitasking is defined as the ability to manage and actively participate in multiple actions simultaneously (Lee & Taatgen, 2002; Wood & Zivcakova, 2015), and a critical analysis of multitasking relies on a framework for working memory and cognitive load (Sweller, 2011). Working memory is an individual’s capacity to store and manipulate information over a short period of time (Just & Carpenter, 1992) and is described as a “flexible mental workspace” (Alloway et al., 2006, p. 1698); cognitive load refers to the capacity for processing information imposed by working memory (Sweller, 2011). As learners manipulate and process information within working memory, the cognitive load increases. Generally, if the learning environment has too many unnecessary components, this creates extraneous cognitive load, negatively affecting the capacity of a student to learn. Mercimek et al.’s (2020) findings showed that multitasking actually impedes students’ learning. Other research indicated that engaging in multitasking while completing academic tasks had a negative impact on college GPA (Junco & Cotten, 2011), exam grades (Sana et al., 2012), and class grades (Demirbilek & Talan, 2017; Zhang, 2015), suggesting an impairment in cognitive processing. Wang and Tchernev (2012) also note that multitasking results in diminished cognitive performance as memory and cognitive load are taxed by competing stimuli, negatively affecting outcomes; therefore, encouraging multitasking is likely to be detrimental to students.

Learning Styles
The notion that individuals have learning styles and that tailoring instruction to these modalities can enhance students’ learning is among the most persistent cognitive myths (Kirschner & van Merriënboer, 2013; Riener & Willingham, 2010; Torrijos-Muelas et al., 2021), one that can have detrimental effects on students (Scott, 2010). Decades after learning styles-based instruction found wide use in educational settings across a wide spectrum of grade levels and in many countries, exhaustive reviews suggest that nearly all available research evidence has indicated that learning styles do not exist and that adapting instruction to them has no educational benefits (Coffield et al., 2004; Cuevas, 2015, 2016a; Pashler et al., 2009; Rohrer & Pashler, 2012). Well-designed experiments testing the hypothesis have continually debunked the premise, concluding that teaching to learning styles does not enhance learning (Cuevas & Dawson, 2018; Rogowsky et al., 2015, 2020).
The persistent myth of learning styles has been identified as a substantial problem in education that impacts teacher training and the quality of instruction across K-12 and college classrooms (Cuevas, 2017), with some researchers recently exploring ways to dispel such detrimental neuromyths (Nancekivell et al., 2021; Rousseau, 2021).

Digital Natives
Recent generations (e.g., “Millennials” and “Gen Z”) have been characterised as being inundated with technology since birth, and as a result, Prensky (2001) suggested that these learners were digital natives who may learn best when teachers integrate technology into instruction. However, researchers have concluded that these claims are not grounded in empirical evidence or informed by sound theoretical perspectives (Bennett et al., 2008; Jones et al., 2010). Margaryan et al. (2011) similarly found no evidence that this generation of learners had differing learning needs. Kirschner and De Bruyckere (2017) contend that the concept of digital natives is a myth, as these learners often do not utilise technological tools effectively and the use of technology can actually adversely affect knowledge and skill acquisition. Wang et al. (2014) concluded that while students may know how to use technology fluidly within the context of entertainment or social media, they still need guidance from teachers to use technology to support learning. Both Smith et al. (2012) and Hite et al. (2017) came to the conclusion that modern students must be taught to use technology for learning purposes just as previous generations were, and its use does not come naturally in learning environments. Prensky (2012) later revised the concept of digital natives, acknowledging that the premise lacks empirical support. Ultimately, education research suggests that the concept of students being digital natives lacks merit.

Pure Discovery Learning
In their purest form, discovery learning and other similar methods of instruction such as inquiry learning and project-based instruction are designed to focus on maximum student involvement with minimal guidance from the instructor (Clark et al., 2012; Mayer, 2004). Despite the popularity of such approaches, decades of empirical research have shown that minimally guided instruction is not effective at enhancing student performance (Mayer, 2004) and is not consistent with the cognitive science on human learning (Sweller et al., 1998). Learning is defined by change occurring in long-term memory (Kirschner & Hendrick, 2020), which could encompass higher order processes and updates to episodic, semantic, and procedural memory (Cuevas, 2016b). But because students are tasked with navigating unfamiliar territory on their own during discovery-type learning, such minimally guided instruction places a heavy burden on working memory (Kirschner et al., 2006), leaving fewer cognitive resources available to contribute to encoding new information into long-term memory. Cognitive Load Theory suggests that effective instructional methods should decrease cognitive load and that approaches that instead tax working memory and increase cognitive load, as unguided methods do, result in subsequently less learning (Kirschner & Hendrick, 2020; Sweller, 2016).

Extrinsic Motivation
According to Deci et al. (1991), motivation within an education context entails student interest, capacities, and a sense of valuing learning and education. Motivational tendencies can be described as either intrinsic or extrinsic.
Educational motivation can manifest in students completing tasks because of curiosity, awards, interest in a topic, approval from a parent or teacher, enjoyment in learning a new skill, or receiving a good grade (Ryan & Deci, 2000). Educators must attempt to foster students’ motivation, which may result in extrinsic strategies, such as rewards to promote learning (Ryan & Deci, 2009). However, the use of extrinsic motivators can negatively affect students’ motivation if learning is contingent on rewards (Deci et al., 2001; National Research Council, 2018). Instead, teachers should focus on fostering intrinsic motivation by helping students develop academic goals and monitor learning progress while encouraging autonomy and choice (National Research Council, 2018). Gill et al. (2021) found a positive relationship between intrinsic motivational factors and the development of goals and well-being, suggesting that educators should focus on intrinsic motivation as a basis for learning. Vasconcellos et al. (2020) concluded that external motivators were negatively associated with adaptive outcomes and positively associated with maladaptive ones. Decades of research on extrinsic rewards suggest that they do not support learning or healthy motivational behaviours long term, and thus, intrinsic motivational factors should supplant them to promote learning.

Established Learning Principles
Retrieval Practice and the Testing Effect
Research has clearly demonstrated that having students engage in retrieval practice, in which they are tasked with attempting to retrieve learned information from memory, improves long-term retention (Roediger & Butler, 2011; Roediger & Karpicke, 2006; Rowland, 2014). Meta-analyses indicate that the use of retrieval practice is more effective than re-studying for both simple and complex information (Karpicke & Grimaldi, 2012; McDaniel et al., 2011). Retrieval practice often takes the form of practice tests and quizzing, and even a single retrieval session is sufficient to stimulate stronger retention of information than not engaging in testing at all. More than a century of empirical research on what is known as the testing effect has consistently indicated that the use of practice tests, either as a classroom activity or a form of studying, promotes increased learning and retention compared to more commonly used study strategies (Roediger & Karpicke, 2006). Meta-analyses have found that retrieval practice tends to produce the strongest effects in mathematics but that it impacts learning across all content areas, and its positive effects are intensified when students are provided with feedback in response to the practice tests (Bangert-Drowns et al., 1991). Practice testing also produces substantial benefits in enhancing students’ free recall and long-term retention while reducing forgetting (Roediger & Karpicke, 2006).

Dual Coding
Decades of research have firmly established that pairing images with verbal or textual information assists learning and retention of information. This process is explained by Dual Coding Theory, a concept Paivio pioneered in 1969 and expanded upon in 1986. The theory asserts that humans have two separate cognitive systems for processing information, one verbal and one visual, and that when the two are combined there is an additive effect that allows for greater retention than would be possible with just one of the systems being incorporated (Clark & Paivio, 1991; Cuevas, 2016a; Kirschner & Hendrick, 2020).
The two systems are interconnected but functionally independent, an important feature because if the two systems did not function independently, cognitive overload would result, as it often does as a consequence of excessive stimuli. Instead, cognitive load remains low when images are used to supplement linguistic information, and memory is enhanced due to there being two storage systems. Indeed, Cognitive Load Theory, later developed by Sweller (Kirschner & Hendrick, 2020; Sweller, 1988), relies heavily on Dual Coding Theory. Neurological research has provided evidence for the processes that allow for images to enhance memory (Di Virgilio & Clarke, 1997; Fiebach & Friederici, 2003; Welcome et al., 2011). Additionally, a great deal of experimental research from the field of cognitive psychology has documented the benefits of dual coding on human learning and its potential for use in educational settings (Cuevas & Dawson, 2018; Hodes, 1998; Sadoski et al., 1995; Sharps & Price, 1992; Wooten & Cuevas, 2024).

Summarisation
The effectiveness of summarisation is linked to metacognition and self-regulation, two essential components of learning (Day, 1986; Leopold & Leutner, 2012). Kintsch and van Dijk (1978) and Brown and Day (1983) proposed important characteristics for summarising or condensing information, specifically, deleting irrelevant information, identifying pertinent information or specific supporting ideas, writing cohesive statements or topic sentences for each idea, and reconstructing ideas. These features constitute a frame for students to translate learned information into their own words (Westby et al., 2010). They are related to evidence-based strategies such as K-W-L charts (Carr & Ogle, 1987), drawing concepts (Leopold & Leutner, 2012), concept maps (Chang et al., 2002), and the use of headings to structure comprehension (Lorch & Pugzles Lorch, 1996). Summarisation strategies have been shown to be particularly helpful to students of varying age groups and ability levels in building comprehension (Hagaman et al., 2016; Shelton et al., 2020; Solis et al., 2012; Stevens et al., 2019), a skill that is vital for student success (Bogaerds-Hazenberg et al., 2020; Perin et al., 2017). Ultimately, decades of research indicate that summarisation assists students in processing information into a condensed structure and is an effective strategy for supporting reading comprehension and developing content knowledge.

Direct Instruction
Teacher-centred strategies such as direct instruction have tended to lose support in pedagogical circles as student-centred forms such as discovery learning have become more popular (Clark et al., 2012). Yet direct instruction has long been established as being among the most effective forms of instruction (Kirschner & Hendrick, 2020). The method is comprised of well-supported strategies such as the teacher activating background knowledge and fostering introductory focus to start lessons, modelling skills and processes for students, providing well-structured explanations that include chunking of information into manageable portions, and guiding independent practice after students have had sufficient support and have become familiarised with the concepts. Each of these is an aspect of successful scaffolding, and each is strongly supported by research in cognitive science and educational psychology (Rosenshine, 2012).
The evidence for the effectiveness of the different features comprising direct instruction is so thorough that throughout the American Psychological Association’s top 20 principles of teaching and learning (2015) there are sections dedicated to specific components of direct instruction. Additionally, two comprehensive meta-analyses captured the extent of research support for the method. One found consistently significant and positive effects on student learning across 400 studies over 50 years covering all subject areas (Stockard et al., 2018). Another compared the effects of student-centred approaches and direct instruction based on studies across a ten-year period and concluded that the positive effects of direct instruction employing full teacher guidance were far greater than those of student-driven approaches (Furtak et al., 2012).

Spacing
Spacing, or distributed learning, occurs when an instructor or student intentionally inserts time intervals between learning sessions based on the same content. Introducing time intervals between study sessions results in stronger retention than massing practice and limits forgetting (Cepeda et al., 2009; Latimier et al., 2021). Research has shown distributed learning to be effective across many different domains, populations, age groups, and development levels, in each case resulting in substantial improvements in long-term retention (Carpenter et al., 2012; Larsen, 2018; Seabrook et al., 2005). Kirschner and Hendrick (2020) argue that distributed practice is among the most well-established procedures for enhancing learning. In one large, well-designed study, Cepeda et al. (2008) concluded that optimal retention occurred when learners studied information on multiple occasions with gaps between different study periods and tests administered at different time intervals. Dunlosky et al. (2013) noted that while spacing is not consistently or intentionally practised in formal learning environments, the practice has high utility value due to its consistent benefits to students across such a wide range of variables. Rohrer and Pashler (2010) contend that while there is extensive research evidence supporting the effectiveness of using spacing as an instructional strategy, unfortunately, relatively little attention is devoted to its use in practice.

Current Study
The principal purpose of this study was to assess the pedagogical knowledge of faculty at a large state university in the U.S. with a teaching emphasis, specifically their knowledge of practices that abundant research evidence either supports or refutes. Faculty at research universities primarily focus on research output, whereas faculty at teaching universities devote the majority of their time and effort to delivering instruction. Thus, it seemed logical to assess faculty’s knowledge of teaching at a university where teaching, and therefore pedagogy, is the greater emphasis. Additionally, because the field of education is predominantly concerned with pedagogy, ostensibly, education professors would be expected to show the strongest understanding of these concepts, though faculty from all departments within the university were assessed. A secondary purpose of the study was to gauge professors’ metacognitive awareness by ascertaining their confidence levels in their pedagogical knowledge and whether their self-assessments aligned with their actual level of knowledge.
The implication is that if faculty showed high confidence but low knowledge, and therefore low levels of metacognition, they would be unaware of their misconceptions. As a result, such faculty would be unlikely to seek out professional development opportunities or investigate approaches that may ultimately improve their instruction. If, on the other hand, faculty showed stronger metacognition, with low confidence and also low knowledge, they would likely be more willing to engage with sources to improve the delivery of their content because they would be aware of their limitations in that regard. Finally, if they showed strong metacognition with high confidence and high levels of knowledge, this would be the ideal scenario and should result in favourable learning outcomes as long as the faculty also had sufficient understanding of content and socio-cultural awareness.

The study was guided by the following research questions:
1. Do faculty members show a strong understanding of well-established concepts regarding learning as established by cognitive science?
2. Which learning concepts do faculty show the most misunderstanding of (i.e., which myths do they tend to endorse or which established concepts do they tend to reject)?
3. Are there differences in the level of knowledge of learning practices between faculty members from different disciplines? For instance, do faculty from education score significantly higher than faculty in other areas in this regard? Are there faculty from certain content areas who show a propensity for believing in myths or for not being aware of established learning principles?
4. Are there differences in the level of knowledge of learning practices between faculty members according to rank, i.e., university experience?
5. Do faculty show metacognitive awareness of their level of knowledge of teaching and learning practices?

3. Methodology
Contextual Factors and Participants
Data were collected from faculty at a midsized public university comprised of five campuses in the southeastern United States with a total enrolment of approximately 18,000 students and 980 faculty at the time of data collection. The student-to-faculty ratio is calculated to be 18:1, and 74% of faculty are full-time, indicating a low proportion of adjunct faculty. According to the Carnegie Classification system, the university is classified under “Master's Colleges & Universities: Larger Programs”. The vast majority of the students enrolled and the degrees the university confers are at the undergraduate level, but the institution also offers master’s and doctoral degrees. The institution contains a traditional composition of colleges for a state university: Arts and Letters, Business, Education, Health Sciences and Professions, Science and Mathematics, and University College (interdisciplinary studies). The participants were comprised of full-time faculty (N = 107) from each college at the university. The breakdown of responses by college was approximately proportional to the relative size of each college (n = 37, n = 7, n = 14, n = 6, n = 31, n = 4, respectively, with eight declining to identify their college). Respondents were evenly distributed according to rank: Professors (n = 26), Associate Professors (n = 29), Assistant Professors (n = 28), Lecturers/Instructors (n = 22), with 79% being tenured or tenure-track and two declining to specify.
Of all faculty responding to the broader survey (N = 186), 54.3% identified as women, 39.8% identified as men, 2.2% identified as non-binary or non-conforming, and 3.7% chose not to answer. Data on age were not available.

Design
The study used a non-experimental, cross-sectional design that primarily relied on group comparison for the essential analyses. Data were analysed quantitatively using descriptive statistics, analysis of variance, and Pearson correlation. The study did not include an intervention, and data were collected at a single time point. Though data were collected via a survey instrument, an objective assessment of knowledge was used as the primary measure instead of the dispositional constructs that would more commonly be the focus of a survey.

Instrument
The Diverse Learning Environments (DLE) survey was distributed by the Higher Education Research Institute (HERI) electronically to students and faculty across all five campuses. The DLE is administered by the UCLA HERI to colleges and universities across the United States (HERI, 2015) and was designed to assess stakeholders’ perceptions of constructs such as institutional climate, satisfaction with the academic and work environment, institutional diversity, individual social identity, intergroup relations, and discrimination. A more detailed description of the DLE and related variables can be found in Dawson and Cuevas (2019). For this study we analysed the variables related to the faculty portion of the survey. The faculty section of the DLE used for this study was comprised of 95 items, including questions about demographic variables, rank and tenure status, course information, involvement in activities such as research and scholarship, instructional strategies, technology, DEI, COVID-19, satisfaction with the workplace environment, and salary. The majority of the items utilised a Likert scale, with some Yes/No response items and accompanying follow-up questions. The final 35 items were comprised of “local optional questions”, which were added to the survey by the institution beyond the original DLE items in order to address specific questions of interest. The items used for this study were included in the local optional questions and were grouped into two constructs: pedagogical knowledge items and confidence in pedagogical knowledge. The design, selection, and scoring of the items are discussed below.

Pedagogical Knowledge Items
To assess pedagogical knowledge, a 10-item scale was created. The scale consisted of five common myths and misconceptions about learning and five learning strategies that have been well established by research. The five myths entailed the following concepts: learning styles, discovery learning, the efficacy of fostering extrinsic motivation, the efficacy of multitasking, and the existence of digital natives. The five well-established learning concepts consisted of the following: dual coding, summarisation, practice testing, direct instruction, and spacing. The rationale was that pedagogical knowledge could be assessed through two general propositions: how many misconceptions an educator held about learning and how many effective approaches they were aware of.
The items were limited by two main factors: 1) due to the length of the overall survey, we were only able to insert a small number of additional items because of concerns over time constraints and the respondents’ likelihood of answering a lengthy questionnaire, and 2) there needed to be clear and well-established research evidence for or against each item, with each concept as close to being “settled” science as possible. Concerning this second factor, there are still many learning concepts under scrutiny or that may show efficacy in some circumstances but not others, and if that was the case, then we could not consider them. To ensure that the format of the items was consistent with the rest of the survey items on the DLE, they were presented on a 4-point Likert scale, from “Strongly Disagree” to “Strongly Agree”. However, the responses were scored dichotomously as either correct or incorrect. For instance, for the effective learning approaches, if respondents agreed that they were effective by answering either “agree” or “strongly agree”, they were credited with answering the item correctly. If they answered “disagree” or “strongly disagree”, they were credited with an incorrect answer. Alternatively, for the myths and misconceptions, if respondents agreed or strongly agreed that they were effective, then that was treated as an incorrect answer, whereas if they disagreed or strongly disagreed that they were effective, then this was scored as a correct answer. The scale can be found in Appendix A.
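To make the dichotomous scoring concrete, here is a minimal sketch in Python. The item names, data layout, and function names are hypothetical illustrations rather than the study's actual processing pipeline, and the answer key folds the two negatively worded effective-practice items (direct instruction and spacing; see Appendix A) in with the myth items, since disagreement is the correct response to all of them.

```python
# Hypothetical illustration of the dichotomous scoring described above.
# Responses use the survey's 4-point scale: 1 = Strongly Disagree ... 4 = Strongly Agree.
# The key records whether agreement is the correct answer for each item; all myth
# items and the two negatively worded effective-practice items are keyed False.
ANSWER_KEY = {
    "learning_styles": False,      # myth
    "discovery_learning": False,   # myth
    "extrinsic_motivation": False, # myth
    "multitasking": False,         # myth
    "digital_natives": False,      # myth
    "dual_coding": True,           # effective practice
    "summarisation": True,         # effective practice
    "practice_testing": True,      # effective practice
    "direct_instruction": False,   # effective practice, item negatively worded
    "spacing": False,              # effective practice, item negatively worded
}

def score_item(item: str, response: int) -> int:
    """Return 1 if the response is correct, 0 otherwise."""
    agrees = response >= 3  # "Agree" or "Strongly Agree"
    return int(agrees == ANSWER_KEY[item])

def knowledge_score(responses: dict) -> int:
    """Total pedagogical knowledge score (0-10 when all ten items are answered)."""
    return sum(score_item(item, r) for item, r in responses.items())

# A respondent who endorses learning styles but rejects unguided discovery learning:
print(knowledge_score({"learning_styles": 4, "discovery_learning": 1}))  # -> 1
```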
Confidence in Pedagogical Knowledge Scale
In psychological and educational research, metacognition or metacognitive awareness has traditionally been measured by gauging one’s confidence in their knowledge, skills, or expertise on a topic via a survey and then assessing them through an objective test to ascertain whether the two are positively correlated (Anderson & Thiede, 2008; Dunlosky et al., 2005; Thiede et al., 2003). If the individual’s self-assessment positively correlates with their actual knowledge level, then this would indicate strong metacognition. For instance, if the person rated their own knowledge as high and they scored high on the objective assessment, they would have shown good metacognitive awareness. Likewise, if they rated their own knowledge as low and scored low on the assessment, this would also show good metacognitive awareness because the individual would have provided an accurate self-assessment and would be aware that they were not knowledgeable of the subject. In contrast, an individual would show poor metacognition if there was a negative correlation or no correlation between their self-assessment and the objective assessment. It could be that the person assessed themselves as having low levels of knowledge but actually scored high on the assessment, which may be the case if the individual suffered from anxiety or was simply unsure of their own ability. In this example the individual underestimated their knowledge or ability. But the more common form of a lack of metacognitive awareness would be when a negative correlation between the two measures occurred due to the individual overrating their own knowledge on the self-assessment but scoring poorly on the objective assessment on the topic. In essence, they would have believed themselves to be highly knowledgeable while in reality their knowledge level was low. They did not recognise their own lack of knowledge due to having a limited understanding of the field. This is a very common research finding known as the Dunning-Kruger effect (Kruger & Dunning, 1999; Pennycook et al., 2017), wherein a person overestimates their own competence and is unaware that they lack sufficient knowledge of a subject. When individuals contend that they have high levels of knowledge that they actually do not, it is known as overclaiming (Atir et al., 2015; Pennycook & Rand, 2019). To assess metacognitive awareness regarding faculty’s knowledge of pedagogical practices and human learning, we developed a five-item confidence scale. These items asked about familiarity with research on learning and the influence this has on their instruction, confidence in their knowledge of best practices, confidence in their use of best practices, and their familiarity with pedagogical practices compared to their peers. These items were presented on a 5-point Likert scale. In this case, a composite score was derived for each faculty member that represented a self-assessment of their familiarity and knowledge of best practices in regard to student learning. This score was then used to conduct a Pearson correlation between confidence level in pedagogical knowledge and actual pedagogical knowledge, as determined by the scores on the pedagogical knowledge items, to calculate the level of metacognitive awareness of faculty in regard to learning concepts. When tested for reliability, the Cronbach’s alpha coefficient for this scale was .79. The scale can be found in Appendix B.
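As an illustration of the reliability check mentioned above, a minimal sketch of Cronbach's alpha follows (Python with NumPy). The data are synthetic placeholders, so the output will not reproduce the reported alpha of .79; only the computation is the point.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total scores))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic stand-in: 107 respondents x 5 confidence items on a 1-5 scale,
# with negatively coded items assumed to have been reverse-scored already.
rng = np.random.default_rng(0)
confidence_items = rng.integers(1, 6, size=(107, 5)).astype(float)
print(f"alpha = {cronbach_alpha(confidence_items):.2f}")
```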
4. Results
Descriptive Statistics for Pedagogical Knowledge by College and Concept
In order to address the first research question, pertaining to the faculty members’ overall understanding of the 10 pedagogical concepts, a mean score was tabulated for all respondents (N = 107). Individual scores could range from 0, if the respondent answered all questions incorrectly, to 10, if all questions were answered correctly. A correct answer represented agreement that well-established, research-based pedagogical concepts were effective or disagreement that the myths or misconceptions were effective learning practices. Values were reverse-scored for negatively coded items. The mean score across all faculty on the pedagogical knowledge items was slightly above the midpoint of the scale (M = 6.11). This result does not indicate strong pedagogical knowledge and reveals that faculty answered fewer than 65% of the items correctly on average. In Table 1 below, the descriptive statistics with the mean scores on the pedagogical items are organised by the participants’ affiliated college.

College              N    Mean  Std. Dev  Std. Error
Arts and Letters     37   6.38  1.77      0.29
Business             7    6.00  1.00      0.38
Education            14   5.93  2.06      0.55
Health Sciences      6    6.33  0.82      0.33
Sciences & Math      31   6.00  1.73      0.31
University College   4    6.00  1.83      0.91
Unidentified         8    5.63  1.30      1.30
Total                107  6.11  1.67      0.16
Range of possible scores: 0-10
Table 1. Pedagogical Knowledge Scores by College

The second question addressed the respondents' knowledge of effective learning practices in terms of which learning concepts they demonstrated the most and least understanding of. Across faculty from all colleges, the myth or misconception items that instructors scored most poorly on were learning styles (33% correct) and discovery learning (33% correct), with two-thirds of respondents indicating they believed those practices to benefit learning. Additionally, less than half the faculty answered the questions correctly on multitasking (40.6% correct) and digital natives (43.8% correct). While faculty were not effective at identifying misconceptions related to pedagogy, they were more accurate in identifying effective approaches, with nearly all faculty reporting that direct instruction (97.2% correct) and practice testing (94.4% correct) are beneficial to students. Furthermore, faculty recognised the importance of spacing (85.7% correct) and, to a lesser extent, summarisation (66.7% correct) and dual coding (65.4% correct). The full breakdown of correct responses by learning concept across all faculty and according to college can be found in Table 2 below. Of note, data in Table 1 are based on means calculated for each college even if a faculty member did not complete all the items, whereas data in Table 2 required the faculty member to complete all subscales. Because the College of Arts and Letters and the College of Science and Math had several faculty members who did not complete every item, the sample size in Table 2 was reduced by four. Faculty from the College of Education, who should show the most expertise regarding pedagogy, scored higher than faculty in other colleges on only one of the myth/misconception concepts: they were more likely to recognise that unguided discovery learning is ineffective (57.1% correct). College of Education faculty scored similarly to faculty from other colleges on all the other concepts, both for the myths and misconceptions and the effective practices. Additionally, College of Education faculty were split regarding the effectiveness of summarisation for students’ learning (50% correct). Faculty across colleges were largely able to identify effective practices, such as retrieval and direct instruction, but scored somewhat lower regarding dual coding and summarisation.

Learning Concept             All Faculty  Arts & Letters  Business  Education  Health Science  Science & Math  University College
                             (N = 103)    (N = 36)        (N = 7)   (N = 14)   (N = 6)         (N = 29)        (N = 4)
Learning Styles (M/M)        33.0         44.4            14.3      28.6       33.3            31.0            25.0
Discovery Learning (M/M)     33.0         27.8            28.6      57.1       33.3            25.8            50.0
Extrinsic Motivation (M/M)   60.2         66.7            42.9      64.3       50.0            55.2            50.0
Multitasking (M/M)           40.6         40.5            28.6      35.7       50.0            51.6            0.0
Digital Natives (M/M)        43.8         55.6            14.3      35.7       66.7            40.0            50.0
Dual Coding (EP)             65.4         73.0            71.4      64.3       50.0            58.1            50.0
Summarisation (EP)           66.7         64.9            100.0     50.0       66.7            69.0            100.0
Retrieval Practice (EP)      94.4         89.2            100.0     92.9       100.0           96.8            100.0
Direct Instruction (EP)      97.2         97.3            100.0     92.9       100.0           96.7            100.0
Spacing (EP)                 85.7         83.8            100.0     71.4       83.3            96.6            75.0
(M/M) = myth/misconception; (EP) = effective practice
Table 2. Correct Responses (%) on Each Learning Concept Across All Faculty and By College
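The percentages in Table 2 are simple means of the dichotomous item scores within each college. A sketch of how such a breakdown could be computed with pandas follows; the column names and the four-row data frame are hypothetical placeholders, not the study's data.

```python
import pandas as pd

# Hypothetical long-format data: one row per respondent-item pair, with the
# dichotomous correctness score produced by a scoring scheme like the one above.
df = pd.DataFrame({
    "college": ["Education", "Education", "Business", "Business"],
    "concept": ["learning_styles", "spacing", "learning_styles", "spacing"],
    "correct": [0, 1, 0, 1],
})

# Percent correct per concept, overall and broken down by college (cf. Table 2).
overall = df.groupby("concept")["correct"].mean().mul(100).round(1)
by_college = df.pivot_table(index="concept", columns="college",
                            values="correct", aggfunc="mean").mul(100).round(1)
print(overall)
print(by_college)
```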
Pedagogical Knowledge by Academic Discipline/College

For the third research question, we sought to determine whether there were differences in pedagogical knowledge regarding the 10 concepts according to academic field. In particular, we were interested in whether education faculty showed stronger pedagogical knowledge than faculty from other fields, since pedagogy is the core content of education professors. An examination of the descriptive statistics for the mean scores by college (see Tables 1 and 2 above) revealed that the mean scores of faculty from the various colleges fell between 5.63 and 6.38 out of a possible 10. There was no college where faculty averaged more than 65% correct answers. Additionally, only the subset of faculty who chose not to identify their college scored lower in pedagogical knowledge than those from the College of Education; faculty from all other colleges outperformed education faculty in their level of pedagogical knowledge.

To ascertain whether there were statistical differences in pedagogical knowledge according to academic field, faculty were grouped by college (e.g., Arts and Letters, Business, Education, etc.), and ANOVAs were conducted with scores from the pedagogical knowledge scale as the dependent variable. For all ANOVAs, equal variances across groups were assumed, as the assumption of homogeneity of variance was not violated in any of the models. Results revealed no statistically significant differences in pedagogical knowledge between faculty from different colleges overall, F(5, 93) = 0.253, p = .937, η² = .013. Thus, while an examination of the descriptive statistics showed that education faculty did not outperform faculty from other areas, inferential analysis indicated that faculty from across the university scored similarly on overall pedagogical knowledge.

To extend the analyses, we sought to determine whether there were differences by academic field in either the myths and misconceptions or the effective practices. ANOVAs revealed no differences by college in either the myths/misconceptions, F(5, 93) = 1.428, p = .221, η² = .071, or the effective practices, F(5, 93) = 1.700, p = .142, η² = .084. Therefore, faculty from all colleges demonstrated similar levels of knowledge of both myths/misconceptions and effective practices. These results were surprising because faculty from the College of Education would be expected to score higher on both, since pedagogical knowledge is central to their field. Yet this was not the case. There would be no reason to expect faculty from the other colleges to perform better or worse than one another, since pedagogical knowledge generally does not fall directly within their fields of study or expertise.

Pedagogical Knowledge by Rank

Also of interest was whether faculty performed differently on pedagogical knowledge according to academic rank, which may be considered a reflection of teaching experience. Respondents were grouped by rank (i.e., instructor, lecturer, assistant professor, associate professor, professor), and ANOVAs were conducted with scores on overall pedagogical knowledge, myths/misconceptions, and effective practices as dependent variables. Only full-time faculty were included in the sample. Significant main effects by rank were revealed for overall pedagogical knowledge, F(4, 100) = 3.020, p = .021, η² = .108, and for the myths/misconceptions, F(4, 100) = 2.836, p = .028, η² = .102, but not for the effective practices, F(4, 100) = 1.455, p = .222, η² = .055.

For overall pedagogical knowledge, LSD post hoc analyses indicated that professors (p = .008), associate professors (p = .019), and lecturers (p = .044) scored significantly higher than assistant professors. Furthermore, professors scored significantly higher than lecturers (p = .038). The LSD post hoc analyses for the myths/misconceptions revealed that professors correctly identified myths significantly more often than assistant professors (p = .008) and instructors (p = .015).
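The omnibus tests, homogeneity check, effect sizes, and post hoc comparisons reported in this section could be reproduced along the lines sketched below (continuing from the earlier code; "college" stands in for any grouping factor, and substituting a rank column would mirror the analyses by academic rank). Note that unprotected pairwise t-tests only approximate Fisher's LSD, which pools the ANOVA error term across all groups.

```python
from itertools import combinations

import numpy as np
from scipy import stats

# `df` and the "knowledge" score come from the earlier sketch.
groups = {name: g["knowledge"].to_numpy() for name, g in df.groupby("college")}

lev_w, lev_p = stats.levene(*groups.values())     # homogeneity of variance
f_stat, p_val = stats.f_oneway(*groups.values())  # one-way ANOVA

# Eta squared = SS_between / SS_total.
grand = np.concatenate(list(groups.values()))
ss_total = ((grand - grand.mean()) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in groups.values())
print(f"F = {f_stat:.3f}, p = {p_val:.3f}, eta^2 = {ss_between / ss_total:.3f}")

# Unprotected pairwise t-tests (an approximation to Fisher's LSD post hocs).
for (n1, g1), (n2, g2) in combinations(groups.items(), 2):
    t, p = stats.ttest_ind(g1, g2)
    print(f"{n1} vs {n2}: t = {t:.2f}, p = {p:.3f}")
```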
Full and associate professors outperformed less experienced faculty, potentially because they had, on average, more expertise and more years of instructional experience. Descriptive statistics by rank may be found in Tables 3, 4, and 5 below. Note that in Table 3 the possible score ranged from 0 to 10, whereas in Tables 4 and 5 the possible score ranged from 0 to 5. For the myths and misconceptions, the scores for faculty at all ranks fell below the 50% mark (M = 2.5) except for professors, who scored just above it (M = 2.55).

Rank               N     Mean   Std. Dev   Std. Error
Full Professors    26    6.58   1.27       0.25
Associate Profs    29    6.41   2.13       0.40
Assistant Profs    28    5.39   1.47       0.28
Lecturers          18    6.39   1.42       0.33
Instructors        4     4.75   0.50       0.25
Total              105   6.11   1.68       0.16

Table 3. Pedagogical Knowledge Scores by Rank Overall

Rank               N     Mean   Std. Dev   Std. Error
Full Professors    26    2.55   0.50       0.10
Associate Profs    29    2.43   0.41       0.08
Assistant Profs    28    2.25   0.38       0.07
Lecturers          18    2.44   0.39       0.09
Instructors        4     2.00   0.16       0.08
Total              105   2.40   0.43       0.04

Table 4. Pedagogical Knowledge Scores by Rank for Myths and Misconceptions

Rank               N     Mean   Std. Dev   Std. Error
Full Professors    26    3.01   0.32       0.06
Associate Profs    29    2.89   0.22       0.04
Assistant Profs    28    2.89   0.29       0.05
Lecturers          18    3.00   0.26       0.06
Instructors        4     3.10   0.38       0.19
Total              105   2.95   0.28       0.03

Table 5. Pedagogical Knowledge Scores by Rank for Effective Practices

Metacognitive Awareness of Pedagogical Knowledge

Of particular interest was whether faculty demonstrated metacognitive awareness of their own levels of pedagogical knowledge. As noted in the method section above, metacognition, or metacognitive awareness, has typically been measured by assessing respondents' confidence in their own knowledge or ability, assessing them on an objective test of that knowledge or ability, and then conducting a correlational analysis to determine whether their self-beliefs correspond with their actual knowledge or performance levels. For this analysis, we conducted Pearson correlations between faculty's scores on the Confidence in Pedagogical Knowledge Scale and their scores on the pedagogical knowledge items to test for a relationship between the two.

When measuring the metacognitive awareness of pedagogy for all participating faculty, a Pearson correlation revealed a weak, non-significant negative correlation, r(107) = -.157, p = .105. Accurate metacognitive awareness is shown only when there is a significant positive correlation between one's confidence in, or views of, one's own knowledge or ability and one's actual levels in those areas. A negative or non-significant correlation indicates poor metacognition, in that respondents' views of their own knowledge or abilities do not correspond with their actual levels. That was the case across all faculty in the study. For faculty in the College of Education, who we hypothesised would have greater pedagogical knowledge and awareness of their own levels of expertise in the area, we again found no correlation between self-reported expertise and actual pedagogical knowledge, r(14) = .003, p = .992. We also tested for differences in metacognitive awareness between faculty based on academic rank; however, no significant differences were found, with more experienced faculty showing no greater awareness in this regard than newer faculty.
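The following minimal sketch, continuing from the earlier code and using hypothetical item names and synthetic responses, shows how the reliability and metacognition analyses could be computed: Cronbach's alpha for the five-item confidence scale (including the alpha-if-item-deleted check the authors describe in the limitations section), the composite confidence score, and its Pearson correlation with the knowledge score.

```python
import numpy as np
from scipy import stats

# Hypothetical confidence items; the last two are negatively coded, as in
# Appendix B. `df` carries over from the earlier sketch.
CONF_ITEMS = ["influence", "knowledge_bp", "use_bp",
              "less_familiar_neg", "not_confident_neg"]
rng2 = np.random.default_rng(1)
for c in CONF_ITEMS:
    df[c] = rng2.integers(1, 6, size=len(df))

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

conf = df[CONF_ITEMS].copy()
neg = ["less_familiar_neg", "not_confident_neg"]
conf[neg] = 6 - conf[neg]                 # reverse-score the negative items

print(f"alpha = {cronbach_alpha(conf):.3f}")
for item in CONF_ITEMS:                   # alpha-if-item-deleted check
    print(item, f"{cronbach_alpha(conf.drop(columns=item)):.3f}")

df["confidence"] = conf.mean(axis=1)      # composite self-assessment
r, p = stats.pearsonr(df["confidence"], df["knowledge"])
print(f"r = {r:.3f}, p = {p:.3f}")
```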
Overall, faculty reported high confidence that their teaching was heavily influenced by familiarity with effective teaching practices (81.2% agreement) and did not believe that other faculty members had better knowledge of those practices (87.7% agreement). While faculty did show recognition of effective practices, they also endorsed myths and misconceptions regarding pedagogy, the area in which they scored most poorly.

5. Discussion

The goals of this study were to assess the pedagogical knowledge, as a function of both effective practices and misconceptions, of faculty at a large teaching-centred state university and to gauge their metacognitive awareness regarding their own instructional practices. In addition, we sought to discern whether these outcomes varied according to academic discipline, as represented by college, and by academic rank, which can often serve as a proxy for experience in higher education.

The data present a picture of faculty who tended to characterise all the pedagogical approaches they were presented with as effective, regardless of whether those approaches were myths or misconceptions or actually effective strategies. This resulted in a dynamic in which faculty correctly classified effective practices as beneficial to learning but also incorrectly endorsed myths and misconceptions. It suggests that faculty at the university in this study are often incorrect in their assumptions regarding ineffective practices and mistakenly believe that practices debunked by research benefit students. This finding is consistent with recent research indicating that educators continue to report holding beliefs in neuromyths despite a wealth of evidence to the contrary (Macdonald et al., 2017; Newton & Salvi, 2020; Rousseau, 2021).

Faculty answered the majority of the pedagogical items correctly, with most incorrect answers coming in the form of endorsing myths and misconceptions as effective practices. For example, two-thirds of faculty believed unguided discovery learning and learning styles to be beneficial to student learning, while more than half were incorrect about multitasking and digital natives. The only misconception that the majority of faculty correctly characterised was the use of extrinsic motivators, with most classifying reward systems as ineffective long-term instructional strategies. For the effective practices, the vast majority of faculty correctly recognised the efficacy of direct instruction, retrieval practice, and spacing, with a smaller majority recognising the effectiveness of summarisation and dual coding. While faculty demonstrated an understanding that certain research-supported strategies are effective, many also believed several of the myths and misconceptions to be beneficial. This may indicate that instructors are not successfully differentiating between effective and ineffective practices, a finding consistent with prior research (Rohrer & Pashler, 2010) and one suggesting that faculty may often unknowingly choose ineffective methods. If this is the case, then it is likely that many unproductive teaching methods are used in college classrooms, as well as at the public-school level, where similar findings have emerged (Lethaby & Harries, 2016).

The results were somewhat less clear regarding the pedagogical knowledge of faculty according to rank. We did not have specific expectations about how faculty at different ranks would perform.
It was possible that higher-ranking faculty would show stronger pedagogical knowledge due to having more instructional experience, although rank is an imperfect proxy for experience: some lower-ranked faculty may previously have been employed at other universities or held multiple postdoctoral positions, so higher-ranked faculty may not universally have been more experienced. Another possibility was that newer faculty would be more familiar with learning science than longer-tenured colleagues because they had been exposed to the latest research more recently, when completing their doctorates. The former scenario appeared more likely to be the case than the latter, with full professors and associate professors outperforming assistant professors in overall pedagogical knowledge. While faculty of all ranks performed similarly in identifying effective instructional practices, full professors were less likely to endorse myths and misconceptions than assistant professors and lecturers. Thus, faculty across ranks appeared to hold similar views regarding effective practices, but the more newly hired or less experienced faculty were more likely to endorse myths and misconceptions than tenured faculty. One possibility suggested by these results is that newly minted Ph.D. holders are not being exposed to accurate learning science in their doctoral programmes and that myths and misconceptions are proliferating at that level.

The most notable contributions of this study emerged through analyses resulting in non-significant findings. Surprisingly, College of Education faculty, whose academic discipline is entirely rooted in pedagogy, did not demonstrate a better understanding of these research-based, well-established instructional concepts than faculty from other disciplines. With the exception of unguided discovery learning, education faculty were just as likely to endorse myths and misconceptions about learning and no more likely to recognise practices supported by well-established research. In fact, while the difference was not statistically significant, education faculty scored lower in pedagogical knowledge than faculty from each of the other colleges.

Additionally, faculty across the university showed a lack of metacognitive awareness in regard to their pedagogical knowledge. The absence of a positive correlation between faculty members' confidence in their knowledge of teaching and learning practices and their actual knowledge of those practices revealed a limited understanding of their own knowledge in the area. In short, being confident in their knowledge of pedagogy was not related to actually having high levels of knowledge on the topic. This dynamic was true of faculty from the College of Education as well. Individuals with low levels of knowledge yet high confidence in that knowledge are unlikely to change their views and seek ways to improve their knowledge or performance. In this case, such faculty would be unlikely to improve upon or learn about new instructional techniques over time.

The dichotomy between actual knowledge and confidence in one's own knowledge was underscored by the preponderance of faculty, nearly 88%, who believed that others at the university did not know more about pedagogical approaches than they did. In one respect, this demonstrates high self-efficacy in their teaching practices, but it may also be cause for concern.
Considering that the vast majority of faculty participants were trained in, and taught in, fields in which learning science was not central to the discipline, a belief that no one else at the university has more expertise in pedagogy could lead to circumstances in which faculty are uninterested in pursuing more effective methods or learning about emerging research on teaching practices. This situation may mirror that of K-12 education, where public school teachers may receive limited instruction in learning science and ultimately default to relying on anecdotal experience to guide their practice.

This particular dynamic, high levels of confidence paired with low levels of knowledge, represents the well-established Dunning-Kruger effect (Kruger & Dunning, 1999). The Dunning-Kruger effect most commonly appears in respondents with the lowest levels of knowledge on a subject, when those with little expertise overclaim and believe their knowledge to be high in a field with which they have limited familiarity (Atir et al., 2015; Pennycook et al., 2017). One novel contribution of the present study is that the participants cannot be viewed as having the lowest levels of expertise yet still demonstrated what can be considered the Dunning-Kruger effect, because their confidence far exceeded their actual knowledge levels. Faculty almost universally held terminal degrees in an academic field, most likely from national research universities. This suggests that the effect does not apply only to those with the lowest levels of knowledge or to those operating outside their field. In this case, all participants had relatively high levels of experience and some knowledge of the learning sciences. However, it appears that the learning science underpinning pedagogy is specialised enough that even those with extensive teaching experience may have limited knowledge of it while at the same time being confident in their familiarity with the subject.

It is important to note that there is no reason to conclude that the issues revealed here are unique or specific to the university that served as the focus of the study, as the participants were not educated at this institution. The vast majority of faculty at the university received their Ph.D.s from what are considered to be high-level research universities, categorised under the Carnegie classification system as "Doctoral Universities: Very High Research Activity". Thus, it is likely that their cohort members educated at the same universities who secured teaching positions at other institutions would hold similar views. Considering this situation, these results may indicate much wider issues in the preparation of university faculty for teaching purposes.

Limitations

One limitation of the present study was that the assessment designed to measure knowledge of pedagogical practices was restricted to ten items. There were two primary reasons for this. First, as an extension of a much larger survey instrument, we were limited in the number of items we were able to introduce. It can certainly be argued that ten items do not provide enough breadth to capture the full range of pedagogical practices, though using a mixture of effective practices and myths/misconceptions allowed for more nuanced analyses of pedagogical knowledge.
The second reason for the limited number of pedagogical items was that it was somewhat challenging to identify approaches for which the research evidence was abundant enough that we could consider the science settled and which were not confined to certain developmental levels or content areas. For instance, a practice such as interleaving is well supported across age ranges, but research has most commonly linked it to math instruction (Rohrer & Pashler, 2010), and it is less clear how it may apply to language arts or history instruction. Likewise, practices like the use of manipulatives have been shown to be effective, but mostly for young children and in somewhat narrow contexts (Willingham, 2017). For these reasons, the measure of pedagogical knowledge was limited to a short list of practices.

Another potential limitation concerns the question of how settled is "settled". Unlike in the hard sciences, research in the social sciences rarely approaches that level of consensus, and debate and contradictory findings may continue for decades. We therefore chose concepts for which we determined there was the greatest consensus among researchers and the greatest abundance of robust empirical evidence, drawing upon compilations such as those produced by the American Psychological Association (APA, 2015) and a collection of seminal studies in educational psychology (Kirschner & Hendrick, 2020), each of which is supported by hundreds of studies. Thus, although some academics may dispute our choices of concepts, we are confident in the validity and reliability of those we included, though the criteria limited our options.

Additionally, the reliability of the Confidence in Pedagogical Knowledge Scale was not as strong as we would have liked (α = .79). Reliability was further tested by removing each of the five items in turn to ascertain whether any four-item version proved more reliable, but the highest reliability was achieved by including all five items. Future researchers could improve upon this scale by introducing stronger items or replacing ones from the current scale. Nonetheless, while ideally the reliability of the present scale would have been stronger, it was acceptable.

A final limitation was that the data were collected from just one large state university and the sample was restricted to 107 faculty members. However, as noted above, these faculty members were trained at a wide range of universities, the majority of them national research universities, so we do not view the results as a reflection of only the one institution that was the focus of the study. Nevertheless, we recommend that future researchers extend their samples to several institutions at a variety of levels, such as community colleges, comprehensive universities, and research universities, as well as to teachers in K-12 settings.

Implications

In sum, the data suggest a situation in which faculty do not have a strong understanding of which pedagogical approaches are effective according to learning science and which are not, yet they feel relatively confident that they do. Faculty were able to correctly identify effective practices but could not distinguish those from myths and misconceptions, and the widespread endorsement of certain misconceptions, such as learning styles and discovery learning, indicates that faculty may not be employing the most efficacious teaching methods.
The limited levels of metacognitive awareness further suggest that such faculty members may be unlikely to seek out better methods if they are unaware that a variety of concepts they endorse do not benefit learning. It is understandable that faculty members from the colleges of Arts and Letters, Business, Math, and Sciences would not be as aware of the cognitive science supporting different learning strategies, as their doctoral preparation focused on promoting the highest levels of knowledge and research connected to their specific disciplines. However, the issue is more complex for those in the College of Education.

The remedies for faculty from colleges other than Education are straightforward. University programmes that prepare Ph.D. candidates to be college faculty can ensure that they include courses structured to familiarise future faculty with recent research on human learning and with how cognitive science informs teaching practices in real-world applications. Additionally, most universities have centres for teaching and learning designed to provide professional development for faculty, particularly in the area of pedagogy. These centres should endeavour to provide the most recent and accurate information and avoid endorsing concepts shown by learning science to be misconceptions. Both options are already instituted at many universities; our results suggest that they must be done better.

The issues regarding remedies for colleges of education are more daunting. Pedagogy is the central content of these programmes. Colleges of education prepare K-12 teachers, K-12 administrators, district and state-level administrators, and, very often, university administrators who receive their doctorates in leadership programmes housed in colleges of education. If education professors are not aware of the most well-established teaching and learning methods, then their students, including K-12 teachers and administrators at each level, will not become aware of them either. And because nearly all of the education faculty at the university in this study received their Ph.D.s from a range of R-1 institutions, the issue may be widespread. State and national accrediting bodies may not sufficiently account for professors' level of pedagogical knowledge, which may partially explain why misinformation about learning, such as learning styles, is included in teacher licensure materials in the majority of states (Furey, 2020). Questions have arisen about the efficacy of the training provided by colleges of education (Asher, 2018; Goldhaber, 2019), and our results appear to underscore those concerns.

These results, if replicated in further studies, should compel those in academia to rethink and perhaps overhaul the foundations of colleges of education, in addition to making substantial alterations to accrediting bodies for colleges of education and state boards of education in the U.S. These organisations may not be meeting their primary responsibility of ensuring that their graduates adequately understand teaching and learning practices. It should not be sufficient simply to train teachers to function within a system. They should be taught what works to enhance student learning in order to become better teachers, but that is not possible if those teaching them at the university level do not have a full understanding of fundamental principles of learning.
A first step in the process would be for universities and colleges to acknowledge the potential limitation and modify curricula to ensure that adequate attention is devoted to coursework on how human learning occurs. This would necessitate employing personnel with the expertise to teach such courses and may entail universities hiring instructors with specific skill sets that are not currently prioritised in order to advance that particular knowledge base. For existing faculty who may not have had the benefit of coursework in the learning sciences themselves, professional development can be offered. This may consist of learning modules in which faculty read and discuss research pertaining to learning outcomes and instructional strategies from fields such as cognitive psychology, neuroscience, educational psychology, and cognitive science. Ideally, these modules would incorporate the very methods being studied and provide videos and demonstrations of the strategies being used in classroom settings. This is a general overview of initial steps that may be taken, but a key point is that this training should ultimately be introduced to faculty, to those in graduate courses who are likely one day to be faculty or involved in higher education, and to those who are preparing for roles in K-12 education.

6. Conclusion

This study, along with a growing body of research, suggests that instructors are not currently being adequately trained in the cognitive science that informs teaching and learning. If myths and misconceptions about learning persist, those instructors will be unlikely to optimise student learning. By acquiring a more comprehensive understanding of learning science, university instructors will have the opportunity to employ teaching practices that have been shown to enhance cognition, such as methods to increase retention of information, problem-solving skills, or procedural knowledge. There is little doubt that their students would benefit from the use of research-based practices. This is especially true of faculty in colleges of education, who prepare K-12 teachers and administrators at a variety of levels, because an understanding of such approaches could then be transferred to those who would put them to use in classrooms with younger learners. We recommend that universities prioritise an emphasis on learning science to ensure that the candidates they train for teaching positions are aware of effective teaching and learning practices and can distinguish between those and ineffective ones, which should ultimately enhance educational outcomes for all involved.

About the Authors

Joshua A. Cuevas
ORCID ID: 0000-0003-3237-6670
University of North Georgia
[email protected]

Bryan L. Dawson
University of North Georgia
[email protected]

Gina Childers
Texas Tech University
[email protected]

References

Alloway, T. P., Gathercole, S. E., & Pickering, S. J. (2006). Verbal and visuospatial short-term and working memory in children: Are they separable? Child Development, 77(6), pp. 1698-1716. https://doi.org/10.1111/j.1467-8624.2006.00968.x.
American Psychological Association, Coalition for Psychology in Schools and Education. (2015). Top 20 principles from psychology for preK-12 teaching and learning. Retrieved from www.apa.org/ed/schools/cpse/top-twenty-principles.pdf.
Anderson, M. C., & Thiede, K. W. (2008). Why do delayed summaries improve metacomprehension accuracy? Acta Psychologica, 128, pp. 110-118. https://doi.org/10.1016/j.actpsy.2007.10.006.
Asher, L. (2018, April 8). How ed schools became a menace: They trained an army of bureaucrats who are pushing the academy toward ideological fundamentalism. The Chronicle of Higher Education. https://www.chronicle.com/article/How-Ed-Schools-Became-a-Menace/243062.
Atir, S., Rosenzweig, E., & Dunning, D. (2015). When knowledge knows no bounds: Self-perceived expertise predicts claims of impossible knowledge. Psychological Science, 26(8), pp. 1295-1303. https://doi.org/10.1177/0956797615588195.
Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C. L. C. (1991). Effects of frequent classroom testing. Journal of Educational Research, 85, pp. 89-99. https://doi.org/10.1080/00220671.1991.10702818.
Bennett, S., Maton, K., & Kervin, L. (2008). The 'digital natives' debate: A critical review of the evidence. British Journal of Educational Technology, 39(5), pp. 775-786. https://doi.org/10.1111/j.1467-8535.2007.00793.x.
Bogaerds-Hazenberg, S., Evers-Vermeul, J., & van den Bergh, H. (2020). A meta-analysis on the effects of text structure instruction on reading comprehension in the upper elementary grades. Reading Research Quarterly, 56(3), pp. 435-462. https://doi.org/10.1002/rrq.311.
Brown, A. L., & Day, J. D. (1983). Macrorules for summarizing texts: The development of expertise. Journal of Verbal Learning and Verbal Behavior, 22, pp. 1-14. https://doi.org/10.1016/S0022-5371(83)80002-4.
Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H. K., & Pashler, H. (2012). Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction. Educational Psychology Review, 24(3), pp. 369-378. https://doi.org/10.1007/s10648-012-9205-z.
Carr, E., & Ogle, D. (1987). K-W-L Plus: A strategy for comprehension and summarization. Journal of Reading, 30(7), pp. 626-631. https://eric.ed.gov/?id=EJ350560.
Cepeda, N. J., Coburn, N., Rohrer, D., Wixted, J. T., Mozer, M. C., & Pashler, H. (2009). Optimizing distributed practice: Theoretical analysis and practical implications. Experimental Psychology, 56(4), pp. 236-246. https://doi.org/10.1027/1618-3169.56.4.236.
Chang, K., Sung, Y., & Chen, I. (2002). The effect of concept mapping to enhance text comprehension and summarization. The Journal of Experimental Education, 71(1), pp. 5-23. https://doi.org/10.1080/00220970209602054.
Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational Psychology Review, 3(3), pp. 149-210. https://doi.org/10.1007/BF01320076.
Clark, R. E., Kirschner, P. A., & Sweller, J. (2012). Putting students on a path to learning: The case for fully guided instruction. American Educator, 36(1), pp. 6-11. https://eric.ed.gov/?id=EJ971752.
Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning & Skills Research Centre, London.
Cuevas, J. A. (2015). Is learning styles-based instruction effective? A comprehensive analysis of recent research on learning styles. Theory and Research in Education, 13(3), pp. 308-333. https://doi.org/10.1177/1477878515606621.
Cuevas, J. A. (2016a). An analysis of current evidence supporting two alternate learning models: Learning styles and dual coding. Journal of Educational Sciences & Psychology, 6(1), pp. 1-13. https://www.researchgate.net/publication/301692526.
Cuevas, J. A. (2016b). Cognitive psychology's case for teaching higher order thinking. Professional Educator, 15(4), pp. 4-7. https://www.academia.edu/28947876.
Cuevas, J. A. (2017). Visual and auditory learning: Differentiating instruction via sensory modality and its effects on memory. In Student Achievement: Perspectives, Assessment and Improvement Strategies (pp. 29-54). Nova Science Publishers. ISBN-13: 978-1536102055.
Cuevas, J. A. (2019). Addressing the crisis in education: External threats, embracing cognitive science, and the need for a more engaged citizenry. In Nata, R. V. (Ed.), Progress in Education (Vol. 55, pp. 1-38). Nova Science Publishers. ISBN: 978-1-53614-551-9.
Cuevas, J. A., Childers, G., & Dawson, B. L. (2023). A rationale for promoting cognitive science in teacher education: Deconstructing prevailing learning myths and advancing research-based practices. Trends in Neuroscience and Education. https://doi.org/10.1016/j.tine.2023.100209.
Cuevas, J. A., & Dawson, B. L. (2018). A test of two alternative cognitive processing models: Learning styles and dual coding. Theory and Research in Education, 16(1), pp. 40-64. https://doi.org/10.1177/1477878517731450.
Dawson, B. L., & Cuevas, J. A. (2019). An assessment of intergroup dynamics at a multi-campus university: One university, two cultures. Studies in Higher Education, 45(6), pp. 1047-1063. https://doi.org/10.1080/03075079.2019.1628198.
Day, J. (1986). Teaching summarization skills: Influences of student ability level and strategy difficulty. Cognition and Instruction, 3(3), pp. 193-210. https://doi.org/10.1207/s1532690xci0303_3.
Deci, E., Vallerand, R., Pelletier, L., & Ryan, R. (1991). Motivation and education: The self-determination perspective. Educational Psychologist, 26(3&4), pp. 325-346. https://doi.org/10.1080/00461520.1991.9653137.
Deci, E., Koestner, R., & Ryan, R. (2001). Extrinsic rewards and intrinsic motivation in education: Reconsidered once again. Review of Educational Research, 71(1), pp. 1-27. https://doi.org/10.3102/00346543071001001.
Demirbilek, M., & Talan, T. (2017). The effect of social media multitasking on classroom performance. Active Learning in Higher Education, 19(2), pp. 117-129. https://doi.org/10.1177/1469787417721382.
Di Virgilio, G., & Clarke, S. (1997). Direct interhemispheric visual input to human speech areas. Human Brain Mapping, 5, pp. 347-354. https://doi.org/10.1002/(SICI)1097-0193(1997)5:5<347::AID-HBM3>3.0.CO;2-3.
Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students' learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), pp. 4-58. https://doi.org/10.1177/1529100612453266.
Dunlosky, J., Rawson, K. A., & Middleton, E. L. (2005). What constrains the accuracy of metacomprehension judgments? Testing the transfer-appropriate-monitoring and accessibility hypotheses. Journal of Memory and Language, 52, pp. 551-565. https://doi.org/10.1016/j.jml.2005.01.011.
Fiebach, C. J., & Friederici, A. D. (2003). Processing concrete words: fMRI evidence against a specific right-hemisphere involvement. Neuropsychologia, 42(1), pp. 62-70. https://doi.org/10.1016/S0028-3932(03)00145-3.
Furey, W. (2020). The stubborn myth of "learning styles": State teacher-license prep materials peddle a debunked theory. Education Next, 20(3), pp. 8-12. https://www.educationnext.org/.
Furtak, E. M., Seidel, T., Iverson, H., & Briggs, D. C. (2012). Experimental and quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Review of Educational Research, 82, pp. 300-329. https://doi.org/10.3102/0034654312457206.
Gill, A., Trask-Kerr, K., & Vella-Brodrick, D. (2021). Systematic review of adolescent conceptions of success: Implications for wellbeing and positive education. Educational Psychology Review, 33, pp. 1553-1582. https://doi.org/10.1007/s10648-021-09605-w.
Goldhaber, D. (2019). Evidence-based teacher preparation: Policy context and what we know. Journal of Teacher Education, 70(2), pp. 90-101. https://doi.org/10.1177/0022487118800712.
Hagaman, J. L., Casey, K. J., & Reid, R. (2016). Paraphrasing strategy instruction for struggling readers. Preventing School Failure: Alternative Education for Children and Youth, 60, pp. 43-52. http://dx.doi.org/10.1080/1045988X.2014.966802.
Higher Education Research Institute. (2015, October). HERI research brief. https://www.heri.ucla.edu/briefs/DLE/DLE-2015-Brief.pdf.
Hite, R., Jones, M. G., Childers, G., Chesnutt, K., Corin, E., & Pereyra, M. (2017). Pre-service and in-service science teachers' technological acceptance of 3D, haptic-enabled virtual reality instructional technology. Electronic Journal of Science Education, 23(1), pp. 1-34. https://eric.ed.gov/?id=EJ1203195.
Hodes, C. L. (1998). Understanding visual literacy through visual informational processing. Journal of Visual Literacy, 18(2), pp. 131-136. https://doi.org/10.1080/23796529.1998.11674534.
Jones, C., Ramanau, R., Cross, S., & Healing, G. (2010). Net generation or digital natives: Is there a distinct new generation entering university? Computers & Education, 54(3), pp. 722-732. https://doi.org/10.1016/j.compedu.2009.09.022.
Junco, R., & Cotten, S. (2011). No A 4 U: The relationship between multitasking and academic performance. Computers & Education, 59(2), pp. 505-514. https://doi.org/10.1016/j.compedu.2011.12.023.
Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99, pp. 122-149. https://doi.org/10.1037/0033-295X.99.1.122.
Karpicke, J. D., & Grimaldi, P. J. (2012). Retrieval-based learning: A perspective for enhancing meaningful learning. Educational Psychology Review, 24(3), pp. 401-418. https://doi.org/10.1007/s10648-012-9202-2.
Kintsch, W., & van Dijk, T. (1978). Toward a model of text comprehension and production. Psychological Review, 85, pp. 363-394. https://doi.org/10.1037/0033-295X.85.5.363.
Kirschner, P. A., & De Bruyckere, P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67, pp. 135-142. https://doi.org/10.1016/j.tate.2017.06.001.
Kirschner, P. A., & Hendrick, C. (2020). How learning happens: Seminal works in educational psychology and what they mean in practice. New York, NY: Routledge.
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), pp. 75-86. https://doi.org/10.1207/s15326985ep4102_1.
Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), pp. 169-183. https://doi.org/10.1080/00461520.2013.804395.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), pp. 1121-1134. https://doi.org/10.1037/0022-3514.77.6.1121.
Larsen, D. P. (2018). Planning education for long-term retention: The cognitive science and implementation of retrieval practice. Seminars in Neurology, 38(4), pp. 449-456. https://doi.org/10.1055/s-0038-1666983.
Latimier, A., Peyre, H., & Ramus, F. (2021). A meta-analytic review of the benefit of spacing out retrieval practice episodes on retention. Educational Psychology Review, 33, pp. 959-987. https://doi.org/10.1007/s10648-020-09572-8.
Lee, F. J., & Taatgen, N. A. (2002). Multitasking as skill acquisition. CogSci'02: Proceedings of the Cognitive Science Society, August 2002.
Leopold, C., & Leutner, D. (2012). Science text comprehension: Drawing, main idea selection, and summarizing as learning strategies. Learning and Instruction, 22, pp. 16-26. https://doi.org/10.1016/j.learninstruc.2011.05.005.
Lethaby, C., & Harries, P. (2016). Learning styles and teacher training: Are we perpetuating neuromyths? ELT Journal, 70(1), pp. 16-27. https://doi.org/10.1093/elt/ccv051.
Lorch, R., & Pugzles Lorch, E. (1996). Effects of headings on text recall and summarization. Contemporary Educational Psychology, 21(3), pp. 261-278. https://doi.org/10.1006/ceps.1996.0022.
Macdonald, K., Germine, L., Anderson, A., Christodoulou, J., & McGrath, L. M. (2017). Dispelling the myth: Training in education or neuroscience decreases but does not eliminate beliefs in neuromyths. Frontiers in Psychology, 8:1314. https://doi.org/10.3389/fpsyg.2017.01314.
Margaryan, A., Littlejohn, A., & Vojt, G. (2011). Are digital natives a myth or reality? University students' use of digital technologies. Computers & Education, 56(2), pp. 429-440. https://doi.org/10.1016/j.compedu.2010.09.004.
Mayer, R. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59(1), pp. 14-19. https://doi.org/10.1037/0003-066X.59.1.14.
McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103(2), pp. 399-414. https://doi.org/10.1037/a0021782.
Mercimek, B., Akbulut, Y., Dönmez, O., & Sak, U. (2020). Multitasking impairs learning from multimedia across gifted and non-gifted students. Educational Technology, Research and Development, 68(3), pp. 995-1016. https://doi.org/10.1007/s11423-019-09717-9.
Nancekivell, S. E., Sun, X., Gelman, S. A., & Shah, P. (2021). A slippery myth: How learning style beliefs shape reasoning about multimodal instruction and related scientific evidence. Cognitive Science. https://doi.org/10.1111/cogs.13047.
National Research Council. (2018). How people learn II: Learners, contexts, and cultures. Washington, DC: National Academies Press.
Newton, P. M., & Salvi, A. (2020). How common is belief in the learning styles neuromyth, and does it matter? A pragmatic systematic review. Frontiers in Education, 5:602451. https://doi.org/10.3389/feduc.2020.602451.
Paivio, A. (1969). Mental imagery in associative learning and memory. Psychological Review, 76, pp. 241-263.
Paivio, A. (1986). Mental representations. New York, NY: Oxford University Press.
Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2009). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9, pp. 105-119. https://doi.org/10.1111/j.1539-6053.2009.01038.x.
Pennycook, G., & Rand, D. G. (2019). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality, 88(2), pp. 185-200. https://doi.org/10.1111/jopy.12476.
Pennycook, G., Ross, R. M., Koehler, D. J., & Fugelsang, J. A. (2017). Dunning-Kruger effects in reasoning: Theoretical implications of the failure to recognize incompetence. Psychonomic Bulletin & Review, 24(6), pp. 1774-1784. https://doi.org/10.3758/s13423-017-1242-7.
Perin, D., Lauterbach, M., Raufman, J., & Kalamkarian, H. S. (2017). Text-based writing of low-skilled postsecondary students: Relation to comprehension, self-efficacy and teacher judgments. Reading and Writing, 30(4), pp. 887-915. https://doi.org/10.1007/s11145-016-9706-0.
Prensky, M. (2001). Digital natives, digital immigrants part 1. On the Horizon, 9(5), pp. 1-6. http://dx.doi.org/10.1108/10748120110424816.
Prensky, M. (2012). Digital natives to digital wisdom: Hopeful essays for 21st century learning. Thousand Oaks, CA: Corwin.
Riener, C., & Willingham, D. (2010). The myth of learning styles. Change, 42(5), pp. 32-35.
Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), pp. 20-27. https://doi.org/10.1016/j.tics.2010.09.003.
Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), pp. 249-255. https://doi.org/10.1111/j.1467-9280.2006.01693.x.
Rogers, J., & Cheung, A. (2020). Pre-service teacher education may perpetuate myths about teaching and learning. Journal of Education for Teaching, 46(3), pp. 417-420. https://doi.org/10.1080/02607476.2020.1766835.
Rogowsky, B. A., Calhoun, B. M., & Tallal, P. (2015). Matching learning style to instructional method: Effects on comprehension. Journal of Educational Psychology, 107(1), pp. 64-78. https://doi.org/10.1037/a0037478.
Rogowsky, B. A., Calhoun, B. M., & Tallal, P. (2020). Providing instruction based on students' learning style preferences does not improve learning. Frontiers in Psychology, 11:164. https://doi.org/10.3389/fpsyg.2020.00164.
Rohrer, D., & Pashler, H. (2010). Recent research on human learning challenges conventional instructional strategies. Educational Researcher, 39(5), pp. 406-412. https://doi.org/10.3102/0013189X1037477.
Rohrer, D., & Pashler, H. (2012). Learning styles: Where's the evidence? Medical Education, 46(7), pp. 634-635. https://eric.ed.gov/?id=ED535732.
Rosenshine, B. V. (2012). Principles of instruction: Research-based strategies that all teachers should know. American Educator, 36(1), pp. 12-19. https://eric.ed.gov/?id=EJ971753.
Rousseau, L. (2021). Interventions to dispel neuromyths in educational settings—A review. Frontiers in Psychology, 12:719692. https://doi.org/10.3389/fpsyg.2021.719692.
Rowland, C. A. (2014). The effect of testing versus restudy on retention: A meta-analytic review of the testing effect. Psychological Bulletin, 140(6), pp. 1432-1463. https://doi.org/10.1037/a0037559.
Ryan, R., & Deci, E. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25, pp. 54-67. https://doi.org/10.1006/ceps.1999.1020.
Ryan, R. M., & Deci, E. L. (2009). Promoting self-determined school engagement: Motivation, learning, and well-being. In K. R. Wenzel & A. Wigfield (Eds.), Handbook of motivation at school (pp. 171-195). Routledge/Taylor & Francis Group.
Sadoski, M., Goetz, E. T., & Avila, E. (1995). Concreteness effects in text recall: Dual coding or context availability? Reading Research Quarterly, 30(2), pp. 278-288. https://doi.org/10.2307/748038.
Sana, F., Weston, T., & Cepeda, N. (2012). Laptop multitasking hinders classroom learning for both users and nearby peers. Computers & Education, 62, pp. 24-31. https://doi.org/10.1016/j.compedu.2012.10.003.
Scott, C. (2010). The enduring appeal of 'learning styles'. Australian Journal of Education, 54(1), pp. 5-17. https://doi.org/10.1177/000494411005400102.
Seabrook, R., Brown, G. D. A., & Solity, J. E. (2005). Distributed and massed practice: From laboratory to classroom. Applied Cognitive Psychology, 19(1), pp. 107-122. https://doi.org/10.1002/acp.1066.
Sharps, M. J., & Price, J. L. (1992). Auditory imagery and free recall. The Journal of General Psychology, 119(1), pp. 81-87. https://doi.org/10.1080/00221309.1992.9921160.
Shelton, A., Lemons, C., & Wexler, J. (2020). Supporting main idea identification and text summarization in middle school co-taught classes. Intervention in School and Clinic, 56(4), pp. 217-223. https://doi.org/10.1177/1053451220944380.
Smith, J., Skrbis, Z., & Western, M. (2012). Beneath the 'Digital Native' myth: Understanding young Australians' online time use. Journal of Sociology, 49(1), pp. 97-118. https://doi.org/10.1177/1440783311434856.
Solis, M., Ciullo, S., Vaughn, S., Pyle, N., Hassaram, B., & Leroux, A. (2012). Reading comprehension interventions for middle school students with learning disabilities: A synthesis of 30 years of research. Journal of Learning Disabilities, 45(4), pp. 327-340. https://doi.org/10.1177/0022219411402691.
Stevens, E. A., Park, S., & Vaughn, S. (2019). A review of summarizing and main idea interventions for struggling readers in grades 3 through 12: 1978-2016. Remedial and Special Education, 40, pp. 131-149. https://doi.org/10.1177/0741932517749940.
Stockard, J., Wood, T. W., Coughlin, C., & Khoury, C. R. (2018). The effectiveness of direct instruction curricula: A meta-analysis of a half century of research. Review of Educational Research, 88(4), pp. 479-507. https://journals.sagepub.com/doi/10.3102/0034654317751919.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), pp. 257-285. https://doi.org/10.1207/s15516709cog1202_4.
Sweller, J. (2011). Human cognitive architecture: Why some instructional procedures work and others do not. In K. Harris, S. Graham, & T. Urdan (Eds.), APA Educational Psychology Handbook (Vol. 1). Washington, DC: American Psychological Association.
Sweller, J. (2016). Working memory, long-term memory, and instructional design. Journal of Applied Research in Memory and Cognition, 5, pp. 360-367. https://doi.org/10.1016/j.jarmac.2015.12.002.
Sweller, J., van Merriënboer, J. J. G., & Paas, F. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, pp. 251-296. https://doi.org/10.1023/A:1022193728205.
Thiede, K. W., Anderson, M., & Therriault, D. (2003). Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology, 95, pp. 66-73. https://doi.org/10.1037/0022-0663.95.1.66.
Torrijos-Muelas, M., González-Víllora, S., & Bodoque-Osma, A. R. (2021). The persistence of neuromyths in the educational settings: A systematic review. Frontiers in Psychology, 11:591923. https://doi.org/10.3389/fpsyg.2020.591923.
Vasconcellos, D., Parker, P. D., Hilland, T., Cinelli, R., Owen, K. B., Kapsal, N., Lee, J., Antczak, D., Ntoumanis, N., Ryan, R. M., & Lonsdale, C. (2020). Self-determination theory applied to physical education: A systematic review and meta-analysis. Journal of Educational Psychology, 112(7), pp. 1444-1469. https://doi.org/10.1037/edu0000420.
Wang, S., Hsu, H., Campbell, T., Coster, D., & Longhurst, M. (2014). An investigation of middle school science teachers and students use of technology inside and outside of classrooms: Considering whether digital natives are more technology savvy than their teachers. Educational Technology Research and Development, 62(6), pp. 637-662. https://doi.org/10.1007/s11423-014-9355-4.
Wang, Z., & Tchernev, J. (2012). The "myth" of media multitasking: Reciprocal dynamics of media multitasking, personal needs, and gratifications. Journal of Communication, 62, pp. 493-513. https://doi.org/10.1111/j.1460-2466.2012.01641.x.
Welcome, S. E., Paivio, A., McRae, K., & Joanisse, M. F. (2011). An electrophysiological study of task demands on concreteness effects: Evidence for dual coding theory. Experimental Brain Research, 212(3), pp. 347-358. https://doi.org/10.1007/s00221-011-2734-8.
Westby, C., Culatta, B., Lawrence, B., & Hall-Kenyon, K. (2010). Summarizing expository texts. Topics in Language Disorders, 30(4), pp. 275-287. https://eric.ed.gov/?id=EJ906737.
Willingham, D. (2017). Ask the cognitive scientist: Do manipulatives help students learn? American Educator, 41(3), pp. 25-40. https://www.aft.org/ae/fall2017/willingham.
Willingham, D. (2019). Ask the cognitive scientist: Should teachers know the basic science of how children learn? American Educator, 43(2), pp. 30-36. https://www.aft.org/ae/summer2019/willingham.
Wood, E., & Zivcakova, L. (2015). Multitasking in educational settings. In L. D. Rosen, N. A. Cheever, & M. Carrier (Eds.), The Wiley handbook of psychology, technology and society (pp. 404-419). Hoboken, NJ: John Wiley & Sons, Inc.
Wooten, J. O., & Cuevas, J. A. (2024). The effects of dual coding theory on social studies vocabulary and comprehension in a 5th grade classroom. International Journal on Social and Education Sciences (IJonSES), 6(4), pp. 673-691. https://doi.org/10.46328/ijonses.696.
Zhang, W. (2015). Learning variables, in-class laptop multitasking and academic performance: A path analysis. Computers & Education, 81, pp. 82-88. https://doi.org/10.1016/j.compedu.2014.09.012.

Appendix A

10 Pedagogical Knowledge Items

Myths:
• Learning styles: When classroom instruction is designed to appeal to students' individual learning styles, students are more likely to learn more. (negatively coded/false)
• Discovery learning: Students learn best if they discover information for themselves through activities with minimal guidance when they can randomly experiment on their own. (negatively coded/false)
• Extrinsic motivation: Students' long-term learning outcomes are likely to be better if teachers and professors stimulate extrinsic motivation through things such as rewards. (negatively coded/false)
• Multitasking: Incorporating instruction that involves students in multitasking activities, such as when they think about multiple concepts at once, leads to better learning outcomes. (negatively coded/false)
• Digital natives: Students learn better when digital tools are incorporated within instruction practice because students today are naturally more adept at technology due to having used it from such a young age. (negatively coded/false)

Effective Approaches:
• Dual coding/Imagery for text: It is generally true that students' memory of lesson content tends to be stronger if visuals and images are used to supplement class lectures, discussions, and readings. (true)
• Summarisation: When students summarise content by reducing information into concise, essential ideas, it helps build students' knowledge and strengthen skills. (true)
• Practice testing: Quizzes and practice tests using open-ended questions tend to boost learning even if students do not do well on them. (true)
• Direct instruction: Direct instruction tends to be an ineffective method for teaching content to students. (negatively coded/false)
• Spacing: Students tend to learn more when instruction is delivered via focused, intensive sessions delivered over a brief time period rather than when information is spread out and revisited over longer time spans. (negatively coded/false)

Appendix B

5 Metacognition Items (Cronbach's alpha = .787)
• My teaching is heavily influenced by a strong familiarity with the research on how students learn.
• I am confident that I am knowledgeable of the best practices to enhance student learning.
• I am confident that I utilise the best practices to enhance student learning.
• I feel less familiar with teaching strategies and best practices compared to my colleagues. (negatively coded)
• I am not always confident that my knowledge of pedagogy and how students learn is as strong as it could be. (negatively coded)

Declarations and Compliance with Ethical Standards

Ethics Approval: All procedures performed were in accordance with established ethical standards and were approved by the University of North Georgia Institutional Review Board.
Informed Consent: Informed consent was obtained from all participants included in the study.
Competing Interests: This research was not grant-related. The authors declare that they have no conflict of interest.
Funding: This research was not funded.
Data Availability: The data associated with this study are owned by the University of North Georgia. Interested parties may contact the first or second author regarding availability and access, and requests will be considered on an individual basis according to institutional guidelines. Anonymised data output sheets are available by contacting the first or second author of the study.

While there is no debate over the importance of content knowledge in teaching and learning, there is some question regarding whether sufficient attention is devoted to the psychological science on learning that underpins pedagogical knowledge (Cuevas, 2019; Willingham, 2019). Certainly, the curricula for those who are preparing to be university instructors in any field, e.g., physics, history, etc., focus predominantly on content knowledge, as PhD programmes are expected to produce professors with the highest level of content knowledge possible. In contrast, programmes that prepare K-12 teachers attempt a greater balance, with more attention paid to how to teach material given the developmental level and background of the learner. However, research over previous decades has shown that many educators maintain beliefs that can be classified as cognitive myths (Kirschner & van Merriënboer, 2013; Rogers & Cheung, 2020; Rousseau, 2021). One large study (N = 3877) found that neuromyths, or misconceptions about education and learning, were prevalent among educators, though the effect was moderated by higher levels of education and exposure to peer-reviewed science (Macdonald et al., 2017).

2. Literature Review

A recent systematic review concluded that nearly 90% of educators maintained a belief in neuromyths about learning styles (Newton & Salvi, 2020), and other research has found that neuromyths in general may be particularly resistant to revision among those in teacher education programmes (Rogers & Cheung, 2020). Research also suggests the belief in neuromyths among educators to be so prevalent that interventions are now being proposed to correct those misconceptions (Rousseau, 2021). If misconceptions about how humans learn are as widespread as research seems to indicate, then the issue would be twofold: instructors would likely be incorporating ineffective methods into their lessons due to the mistaken assumption that such methods will benefit student learning, while also bypassing strategies shown to be effective. Thus, in practice, ineffective methods would displace the instructional strategies known to enhance learning (Rohrer & Pashler, 2010). Therefore, we reasoned that an understanding of pedagogy based on learning science would be reflected by two characteristics: 1) whether faculty were familiar with well-established learning concepts, and 2) whether they held misconceptions regarding how students learn. In this study we documented the level of pedagogical knowledge of faculty at a large state university in the United States with a teaching emphasis by assessing their understanding of basic learning concepts and misconceptions about learning. A second yet essential component of this research was to measure faculty’s level of metacognitive awareness of their pedagogical knowledge, as an understanding of their own knowledge of how learning occurs is likely to influence their willingness to search out more effective practices (Pennycook et al., 2017).

Learning Concepts

We identified ten learning concepts, five of which were well-established by research as beneficial to learning and five of which research has debunked and would be classified as myths or misconceptions. In considering the different learning concepts, we selected concepts for which there would be little debate among learning scientists regarding their efficacy.
Many options were not included because the extent of their effectiveness may depend on nuances regarding their delivery, and as such they may be effective in some circumstances but not in others. For instance, the use of manipulatives shows much promise for students in certain content areas and of certain ages but is not necessarily universally beneficial in all circumstances (Willingham, 2017). The ten we chose were applicable to all content areas and age groups. The following sections constitute succinct summaries of what we deem to be the current research consensus on each of the learning concepts included in the study. More in-depth explanations of each concept can be found in Cuevas et al. (2023).

Myths and Misconceptions

Multitasking

Multitasking is defined as the ability to manage and actively participate in multiple actions simultaneously (Lee & Taatgen, 2002; Wood & Zivcakova, 2015), and a critical analysis of multitasking relies on a framework for working memory and cognitive load (Sweller, 2011). Working memory is an individual’s capacity to store and manipulate information over a short period of time (Just & Carpenter, 1992) and is described as a “flexible mental workspace” (Alloway et al., 2006, p. 1698); cognitive load refers to the demands that processing information places on working memory’s limited capacity (Sweller, 2011). As learners manipulate and process information within working memory, cognitive load increases. Generally, if the learning environment has too many unnecessary components, this creates extraneous cognitive load, negatively affecting a student’s capacity to learn. Mercimek et al.’s (2020) findings showed that multitasking actually impedes students’ learning. Other research indicated that engaging in multitasking while completing academic tasks had a negative impact on college GPA (Junco & Cotten, 2011), exam grades (Sana et al., 2012), and class grades (Demirbilek & Talan, 2017; Zhang, 2015), suggesting an impairment in cognitive processing. Wang and Tchernev (2012) also note that multitasking results in diminished cognitive performance as memory and cognitive load are taxed by competing stimuli, negatively affecting outcomes; therefore, encouraging multitasking is likely to be detrimental to students.

Learning Styles

The notion that individuals have learning styles and that tailoring instruction to these modalities can enhance students’ learning is among the most persistent cognitive myths (Kirschner & van Merriënboer, 2013; Riener & Willingham, 2010; Torrijos-Muelas et al., 2021), one that can have detrimental effects on students (Scott, 2010). Decades after learning styles-based instruction found wide use in educational settings across a wide spectrum of grade levels and in many countries, exhaustive reviews suggest that nearly all available research evidence has indicated that learning styles do not exist and that adapting instruction to them has no educational benefits (Coffield et al., 2004; Cuevas, 2015, 2016a; Pashler et al., 2009; Rohrer & Pashler, 2012). Well-designed experiments testing the hypothesis have continually debunked the premise, concluding that teaching to learning styles does not enhance learning (Cuevas & Dawson, 2018; Rogowsky et al., 2015, 2020).
The persistent myth of learning styles has been identified as a substantial problem in education that impacts teacher training and the quality of instruction across K-12 and college classrooms (Cuevas, 2017), with some researchers recently exploring ways to dispel such detrimental neuromyths (Nancekivell et al., 2021; Rousseau, 2021).

Digital Natives

Recent generations (e.g., “Millennials” and “Gen Z”) have been characterised as being inundated with technology since birth, and as a result, Prensky (2001) suggested that these learners were digital natives who may learn best when teachers integrate technology into instruction. However, researchers have concluded that these claims are not grounded in empirical evidence or informed by sound theoretical perspectives (Bennett et al., 2008; Jones et al., 2010). Margaryan et al. (2011) similarly found no evidence that this generation of learners had differing learning needs. Kirschner and De Bruyckere (2017) contend that the concept of digital natives is a myth, as these learners often do not utilise technological tools effectively and the use of technology can actually adversely affect knowledge and skill acquisition. Wang et al. (2014) concluded that while students may know how to use technology fluidly within the context of entertainment or social media, they still need guidance from teachers to use technology to support learning. Both Smith et al. (2012) and Hite et al. (2017) concluded that modern students must be taught to use technology for learning purposes just as previous generations were, and that its use does not come naturally in learning environments. Prensky (2012) later revised the concept of digital natives, acknowledging that the premise lacks empirical support. Ultimately, education research suggests that the concept of students being digital natives lacks merit.

Pure Discovery Learning

In their purest form, discovery learning and other similar methods of instruction, such as inquiry learning and project-based instruction, are designed to focus on maximum student involvement with minimal guidance from the instructor (Clark et al., 2012; Mayer, 2004). Despite the popularity of such approaches, decades of empirical research have shown that minimally guided instruction is not effective at enhancing student performance (Mayer, 2004) and is not consistent with the cognitive science on human learning (Sweller et al., 1998). Learning is defined by change occurring in long-term memory (Kirschner & Hendrick, 2020), which could encompass higher order processes and updates to episodic, semantic, and procedural memory (Cuevas, 2016b). But because students are tasked with navigating unfamiliar territory on their own during discovery-type learning, such minimally guided instruction places a heavy burden on working memory (Kirschner et al., 2006), leaving fewer cognitive resources available to contribute to encoding new information into long-term memory. Cognitive Load Theory suggests that effective instructional methods should decrease cognitive load, and that approaches that instead tax working memory and increase cognitive load, as unguided methods do, result in less learning (Kirschner & Hendrick, 2020; Sweller, 2016).

Extrinsic Motivation

According to Deci et al. (1991), motivation within an education context entails student interest, capacities, and a sense of valuing learning and education. Motivational tendencies can be described as either intrinsic or extrinsic.
Educational motivation can manifest in students completing tasks because of curiosity, awards, interest in a topic, approval from a parent or teacher, enjoyment in learning a new skill, or receiving a good grade (Ryan & Deci, 2000). Educators must attempt to foster students’ motivation, which may result in extrinsic strategies, such as rewards to promote learning (Ryan & Deci, 2009). However, the use of extrinsic motivators can negatively affect students’ motivation if learning is contingent on rewards (Deci et al., 2001; National Research Council, 2018). Instead, teachers should focus on fostering intrinsic motivation by helping students develop academic goals and monitor learning progress while encouraging autonomy and choice (National Research Council, 2018). Gill et al. (2021) found a positive relationship between intrinsic motivational factors and the development of goals and well-being, suggesting that educators should focus on intrinsic motivation as a basis for learning. Vasconcellos et al. (2020) concluded that external motivators were negatively associated with adaptive outcomes and positively associated with maladaptive ones. Decades of research on extrinsic rewards suggest that they do not support learning or healthy motivational behaviours long term, and thus, intrinsic motivational factors should supplant them to promote learning.

Established Learning Principles

Retrieval Practice and the Testing Effect

Research has clearly demonstrated that having students engage in retrieval practice, when they are tasked with attempting to retrieve learned information from memory, improves long-term retention (Roediger & Butler, 2011; Roediger & Karpicke, 2006; Rowland, 2014). Meta-analyses indicate that the use of retrieval practice is more effective than re-studying for both simple and complex information (Karpicke & Grimaldi, 2012; McDaniel et al., 2011). Retrieval practice often takes the form of practice tests and quizzing, and even a single retrieval session is sufficient to stimulate stronger retention of information than not engaging in testing at all. More than a century of empirical research on what is known as the testing effect has consistently indicated that the use of practice tests, either as a classroom activity or a form of studying, promotes increased learning and retention compared to more commonly used study strategies (Roediger & Karpicke, 2006). Meta-analyses have found that retrieval practice tends to produce the strongest effects in mathematics but that it impacts learning across all content areas, and its positive effects are intensified when students are provided with feedback in response to the practice tests (Bangert-Drowns et al., 1991). Practice testing also produces substantial benefits in enhancing students’ free recall and long-term retention while reducing forgetting (Roediger & Karpicke, 2006).

Dual Coding

Decades of research have firmly established that pairing images with verbal or textual information assists learning and retention of information. This process is explained by Dual Coding Theory, a concept Paivio pioneered in 1969 and expanded upon in 1986. The theory asserts that humans have two separate cognitive systems for processing information, one verbal and one visual, and that when the two are combined there is an additive effect that allows for greater retention than would be possible with just one of the systems being incorporated (Clark & Paivio, 1991; Cuevas, 2016a; Kirschner & Hendrick, 2020).
The two systems are interconnected but functionally independent, an important feature because if the two systems did not function independently, cognitive overload would result, as it often does as a consequence of excessive stimuli. Instead, cognitive load remains low when images are used to supplement linguistic information, and memory is enhanced due to there being two storage systems. Indeed, Cognitive Load Theory, later developed by Sweller (Kirschner & Hendrick, 2020; Sweller, 1988), relies heavily on Dual Coding Theory. Neurological research has provided evidence for the processes that allow images to enhance memory (Di Virgilio & Clarke, 1997; Fiebach & Friederici, 2003; Welcome et al., 2011). Additionally, a great deal of experimental research from the field of cognitive psychology has documented the benefits of dual coding on human learning and its potential for use in educational settings (Cuevas & Dawson, 2018; Hodes, 1998; Sadoski et al., 1995; Sharps & Price, 1992; Wooten & Cuevas, 2024).

Summarisation

The effectiveness of summarisation is linked to metacognition and self-regulation, two essential components of learning (Day, 1986; Leopold & Leutner, 2012). Kintsch and van Dijk (1978) and Brown and Day (1983) proposed important characteristics for summarising or condensing information, specifically, deleting irrelevant information, identifying pertinent information or specific supporting ideas, writing cohesive statements or topic sentences for each idea, and reconstructing ideas. These features constitute a frame for students to translate learned information into their own words (Westby et al., 2010). They are related to evidence-based strategies such as K-W-L charts (Carr & Ogle, 1987), drawing concepts (Leopold & Leutner, 2012), concept maps (Chang et al., 2002), and the use of headings to structure comprehension (Lorch & Pugzles Lorch, 1996). Summarisation strategies have been shown to be particularly helpful to students of varying age groups and ability levels in building comprehension (Hagaman et al., 2016; Shelton et al., 2020; Solis et al., 2011; Stevens et al., 2019), a skill that is vital for student success (Bogaerds-Hazenberg et al., 2020; Perin et al., 2017). Ultimately, decades of research indicate that summarisation assists students in processing information into a condensed structure and is an effective strategy for supporting reading comprehension and developing content knowledge.

Direct Instruction

Teacher-centred strategies such as direct instruction have tended to lose support in pedagogical circles as student-centred forms such as discovery learning have become more popular (Clark et al., 2012). Yet direct instruction has long been established as being among the most effective forms of instruction (Kirschner & Hendrick, 2020). The method comprises well-supported strategies such as the teacher activating background knowledge and fostering introductory focus to start lessons, modelling skills and processes for students, providing well-structured explanations that chunk information into manageable portions, and guiding independent practice after students have had sufficient support and have become familiarised with the concepts. Each of these is an aspect of successful scaffolding, and each is strongly supported by research in cognitive science and educational psychology (Rosenshine, 2012).
The evidence for the effectiveness of the different features comprising direct instruction is so thorough that the American Psychological Association’s top 20 principles of teaching and learning (2015) contain sections dedicated to specific components of direct instruction. Additionally, two comprehensive meta-analyses captured the extent of research support for the method. One found consistently significant and positive effects on student learning across 400 studies over 50 years covering all subject areas (Stockard et al., 2018). Another compared the effects of student-centred approaches and direct instruction based on studies across a ten-year period and concluded that the positive effects of direct instruction employing full teacher guidance were far greater than those of student-driven approaches (Furtak et al., 2012).

Spacing

Spacing, or distributed learning, occurs when an instructor or student intentionally inserts time intervals between learning sessions based on the same content. Introducing time intervals between study sessions results in stronger retention than massing practice and limits forgetting (Cepeda et al., 2009; Latimier et al., 2021). Research has shown distributed learning to be effective across many different domains, populations, age groups, and developmental levels, in each case resulting in substantial improvements in long-term retention (Carpenter et al., 2012; Larsen, 2018; Seabrook et al., 2005). Kirschner and Hendrick (2020) argue that distributed practice is among the most well-established procedures for enhancing learning. In one large, well-designed study, Cepeda et al. (2008) concluded that optimal retention occurred when learners studied information on multiple occasions with gaps between different study periods and tests administered at different time intervals. Dunlosky et al. (2013) noted that while spacing is not consistently or intentionally practiced in formal learning environments, the practice has high utility value due to its consistent benefits to students across such a wide range of variables. Rohrer and Pashler (2010) contend that while there is extensive research evidence supporting the effectiveness of using spacing as an instructional strategy, unfortunately, relatively little attention is devoted to its use in practice.

Current Study

The principal purpose of this study was to assess the pedagogical knowledge of faculty at a large state university in the U.S. with a teaching emphasis, specifically their knowledge of practices for which there is abundant research evidence for or against. Faculty at research universities primarily focus on research output, whereas faculty at teaching universities devote the majority of their time and effort to delivering instruction. Thus, it seemed logical to assess faculty’s knowledge of teaching at a university where teaching, and therefore pedagogy, is the greater emphasis. Additionally, because the field of education is predominantly concerned with pedagogy, education professors would ostensibly be expected to show the strongest understanding of these concepts, though faculty from all departments within the university were assessed. A secondary purpose of the study was to gauge professors’ metacognitive awareness by ascertaining their confidence levels in their pedagogical knowledge and whether their self-assessments aligned with their actual level of knowledge.
The implication is that if faculty showed high confidence but low knowledge, and therefore low levels of metacognition, they would be unaware of their misconceptions. As a result, such faculty would be unlikely to seek out professional development opportunities or investigate approaches that may ultimately improve their instruction. If, on the other hand, faculty showed stronger metacognition, with low confidence and also low knowledge, they would likely be more willing to engage with sources to improve the delivery of their content because they would be aware of their limitations in that regard. Finally, if they showed strong metacognition with high confidence and high levels of knowledge, this would be the ideal scenario and should result in favourable learning outcomes as long as the faculty also had sufficient understanding of content and socio-cultural awareness.

The study was guided by the following research questions:
1. Do faculty members show a strong understanding of well-established concepts regarding learning as established by cognitive science?
2. Which learning concepts do faculty show the most misunderstanding of (i.e., which myths do they tend to endorse, and which established concepts do they tend to reject)?
3. Are there differences in the level of knowledge of learning practices between faculty members from different disciplines? For instance, do faculty from education score significantly higher than faculty in other areas in this regard? Are there faculty from certain content areas who show a prevalence for believing in myths or for not being aware of established learning principles?
4. Are there differences in the level of knowledge of learning practices between faculty members according to rank, i.e., university experience?
5. Do faculty show metacognitive awareness of their level of knowledge of teaching and learning practices?

3. Methodology

Contextual Factors and Participants

Data were collected from faculty at a midsized public university comprising five campuses in the southeastern United States, with a total enrolment of approximately 18,000 students and 980 faculty at the time of data collection. The student-to-faculty ratio is 18:1, and 74% of faculty are full-time, indicating a low proportion of adjunct faculty. According to the Carnegie Classification system, the university is classified under “Master's Colleges & Universities: Larger Programs”. The vast majority of the students enrolled and the degrees the university confers are at the undergraduate level, but the institution also offers master’s and doctoral degrees. The institution contains a traditional composition of colleges for a state university: Arts and Letters, Business, Education, Health Sciences and Professions, Science and Mathematics, and University College (interdisciplinary studies). The participants consisted of full-time faculty (N = 107) from each college at the university. The breakdown of responses by college was approximately proportional to the relative size of each college (n = 37, n = 7, n = 14, n = 6, n = 31, n = 4, respectively, with eight declining to identify their college). Respondents were fairly evenly distributed according to rank: Professors (n = 26), Associate Professors (n = 29), Assistant Professors (n = 28), Lecturers/Instructors (n = 22), with 79% being tenured or tenure track and two declining to specify.
Of all faculty responding to the broader survey (N = 186), 54.3% identified as women, 39.8% identified as men, 2.2% identified as non-binary or non-conforming, and 3.7% chose not to answer. Data on age were not available.

Design

The study used a non-experimental, cross-sectional design that primarily relied on group comparison for the essential analyses. Data were analysed quantitatively using descriptive statistics, analysis of variance, and Pearson correlation. The study did not include an intervention, and data were collected at a single time point. Though data were collected via a survey instrument, an objective assessment of knowledge was used as the primary measure instead of the dispositional constructs that would more commonly be the focus of a survey.

Instrument

The Diverse Learning Environments (DLE) survey was distributed by the Higher Education Research Institute (HERI) electronically to students and faculty across all five campuses. The DLE is administered by the UCLA HERI to colleges and universities across the United States (HERI, 2015) and was designed to assess stakeholders’ perceptions of constructs such as institutional climate, satisfaction with the academic and work environment, institutional diversity, individual social identity, intergroup relations, and discrimination. A more detailed description of the DLE and related variables can be found in Dawson and Cuevas (2019). For this study we analysed the variables related to the faculty portion of the survey. The faculty section of the DLE used for this study comprised 95 items, including questions about demographic variables, rank and tenure status, course information, involvement in activities such as research and scholarship, instructional strategies, technology, DEI, COVID-19, satisfaction with the workplace environment, and salary. The majority of the items utilised a Likert scale, with some Yes/No response items and accompanying follow-up questions. The final 35 items were “local optional questions”, which were added to the survey by the institution beyond the original DLE items in order to address specific questions of interest. The items used for this study were included in the local optional questions and were grouped into two constructs: pedagogical knowledge and confidence in pedagogical knowledge. The design, selection, and scoring of the items are discussed below.

Pedagogical Knowledge Items

To assess pedagogical knowledge, a 10-item scale was created. The scale consisted of five common myths and misconceptions about learning and five learning strategies that have been well-established by research. The five myths entailed the following concepts: learning styles, discovery learning, the efficacy of fostering extrinsic motivation, the efficacy of multitasking, and the existence of digital natives. The five well-established learning concepts consisted of the following: dual coding, summarisation, practice testing, direct instruction, and spacing. The rationale was that pedagogical knowledge could be assessed through two general propositions: how many misconceptions an educator held about learning and how many effective approaches they were aware of.
The items were limited by two main factors: 1) due to the length of the overall survey, we were only able to insert a small number of additional items because of concerns over time constraints and the likelihood of respondents completing a lengthy questionnaire, and 2) there needed to be clear and well-established research evidence for or against each item, with each concept as close to “settled” science as possible. Concerning this second factor, many learning concepts are still under scrutiny or may show efficacy in some circumstances but not others, and in such cases we could not consider them. To ensure that the format of the items was consistent with the rest of the survey items on the DLE, they were presented on a 4-point Likert scale, from “Strongly Disagree” to “Strongly Agree”. However, the responses were scored dichotomously as either correct or incorrect. For the effective learning approaches, if respondents agreed that they were effective by answering either “agree” or “strongly agree”, they were credited with a correct answer; if they answered “disagree” or “strongly disagree”, they were credited with an incorrect answer. Conversely, for the myths and misconceptions, agreeing or strongly agreeing that they were effective was treated as an incorrect answer, whereas disagreeing or strongly disagreeing was scored as a correct answer. The scale can be found in Appendix A.

Confidence in Pedagogical Knowledge Scale

In psychological and educational research, metacognition or metacognitive awareness has traditionally been measured by gauging one’s confidence in their knowledge, skills, or expertise on a topic via a survey and then assessing them through an objective test to ascertain whether the two are positively correlated (Anderson & Thiede, 2008; Dunlosky et al., 2005; Thiede et al., 2003). If the individual’s self-assessment positively correlates with their actual knowledge level, this indicates strong metacognition. For instance, if the person rated their own knowledge as high and they scored high on the objective assessment, they would have shown good metacognitive awareness. Likewise, if they rated their own knowledge as low and scored low on the assessment, this would also show good metacognitive awareness because the individual would have provided an accurate self-assessment and would be aware that they were not knowledgeable of the subject. In contrast, an individual would show poor metacognition if there was a negative correlation or no correlation between their self-assessment and the objective assessment. It could be that the person assessed themselves as having low levels of knowledge but actually scored high on the assessment, which may be the case if the individual suffered from anxiety or was simply unsure of their own ability. In this example the individual underestimated their knowledge or ability. But the more common form of a lack of metacognitive awareness occurs when the individual overrates their own knowledge on the self-assessment yet scores poorly on the objective assessment, producing a negative or null correlation between the two measures. In essence, they would have believed themselves to be highly knowledgeable while in reality their knowledge level was low. They did not recognise their own lack of knowledge due to having a limited understanding of the field.
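To make this logic concrete, the following is a minimal sketch of how confidence ratings and objective scores might be correlated to index metacognitive awareness. All numbers are invented for demonstration and are not the study’s data:

```python
# Hypothetical illustration of metacognition as a confidence-knowledge
# correlation; all numbers are invented for demonstration purposes.
from scipy.stats import pearsonr

# Self-reported confidence composites (higher = more confident)
confidence = [4.2, 3.1, 4.8, 2.5, 3.9, 2.2, 4.5, 3.4]

# Well-calibrated group: objective knowledge scores (0-10) rise with confidence.
knowledge_calibrated = [8, 5, 9, 4, 7, 3, 9, 6]
r, p = pearsonr(confidence, knowledge_calibrated)
print(f"calibrated: r = {r:.2f}, p = {p:.3f}")  # strong positive r -> good metacognition

# Miscalibrated group: the least knowledgeable report the most confidence.
knowledge_miscalibrated = [5, 7, 4, 8, 5, 7, 4, 6]
r, p = pearsonr(confidence, knowledge_miscalibrated)
print(f"miscalibrated: r = {r:.2f}, p = {p:.3f}")  # negative r -> poor metacognition
```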
Overestimating one’s own competence in this way is a very common research finding known as the Dunning-Kruger effect (Kruger & Dunning, 1999; Pennycook et al., 2017), wherein a person is unaware that they lack sufficient knowledge of a subject. When individuals contend that they have high levels of knowledge that they actually do not, it is known as overclaiming (Atir et al., 2015; Pennycook & Rand, 2019). To assess metacognitive awareness regarding faculty’s knowledge of pedagogical practices and human learning, we developed a five-item confidence scale. These items asked about familiarity with research on learning and the influence this has on their instruction, confidence in their knowledge of best practices, confidence in their use of best practices, and their familiarity with pedagogical practices compared to their peers. These items were presented on a 5-point Likert scale. A composite score was derived for each faculty member that represented a self-assessment of their familiarity and knowledge of best practices in regard to student learning. This score was then used to conduct a Pearson correlation between confidence in pedagogical knowledge and actual pedagogical knowledge, as determined by scores on the pedagogical knowledge items, to gauge faculty’s metacognitive awareness of learning concepts. When tested for reliability, the Cronbach’s alpha coefficient for this scale was .79. The scale can be found in Appendix B.

4. Results

Descriptive Statistics for Pedagogical Knowledge by College and Concept

In order to address the first research question, pertaining to the faculty members’ overall understanding of the 10 pedagogical concepts, a mean score was tabulated for all respondents (N = 107). Individual scores could range from 0, if the respondent answered all questions incorrectly, to 10, if all questions were answered correctly. A correct answer represented either agreement that a well-established, research-based pedagogical concept was effective or disagreement that a myth or misconception was an effective learning practice; values were reverse-scored for negatively coded items. The mean score across all faculty on the pedagogical knowledge items was slightly above the midpoint of the scale (M = 6.11). This result does not indicate strong pedagogical knowledge and reveals that faculty answered fewer than 65% of the items correctly on average. Table 1 below presents the descriptive statistics for the pedagogical knowledge scores organised by the participants’ affiliated college.

College              N     Mean   Std. Dev   Std. Error
Arts and Letters     37    6.38   1.77       0.29
Business             7     6.00   1.00       0.38
Education            14    5.93   2.06       0.55
Health Sciences      6     6.33   0.82       0.33
Sciences & Math      31    6.00   1.73       0.31
University College   4     6.00   1.83       0.91
Unidentified         8     5.63   1.30       1.30
Total                107   6.11   1.67       0.16
Range of possible scores: 0–10
Table 1. Pedagogical Knowledge Scores by College

The second question addressed the respondents’ knowledge of effective learning practices in terms of which learning concepts they demonstrated the most and least understanding of. Across faculty from all colleges, the myth or misconception items that instructors scored most poorly on were learning styles (33% correct) and discovery learning (33% correct), with two-thirds of respondents indicating they believed those practices to benefit learning.
Additionally, less than half the faculty answered the questions correctly on multitasking (40.6% correct) and digital natives (43.8% correct). While faculty were not effective at identifying misconceptions related to pedagogy, they were more accurate in identifying effective approaches, with nearly all faculty reporting that direct instruction (97.2% correct) and practice testing (94.4% correct) are beneficial to students. Furthermore, faculty recognised the importance of spacing (85.7% correct) and, to a lesser extent, summarisation (66.7% correct) and dual coding (65.4% correct). The full breakdown of correct responses by learning concept across all faculty and according to college can be found in Table 2 below. Of note, data in Table 1 are based on means calculated for each college even if a faculty member did not complete all the items, whereas data in Table 2 required the faculty member to complete all subscales. Because the College of Arts and Letters and the College of Science and Math had several faculty members who did not complete each item, the sample size in Table 2 was reduced by four. Faculty from the College of Education, who should show the most expertise regarding pedagogy, scored higher than faculty in other colleges on only one myth/misconception concept: they were more likely to recognise that unguided discovery learning is ineffective (57.1% correct). College of Education faculty scored similarly to faculty from other colleges on all the other concepts, both for the myths and misconceptions and the effective practices. Additionally, College of Education faculty were split regarding the effectiveness of summarisation for students’ learning (50% correct). Faculty across colleges were largely able to identify effective practices, such as retrieval and direct instruction, but scored somewhat lower regarding dual coding and summarisation.

Learning Concept             All Faculty   Arts & Letters   Business   Education   Health Science   Science & Math   University College
                             (N = 103)     (N = 36)         (N = 7)    (N = 14)    (N = 6)          (N = 29)         (N = 4)
Learning Styles (M/M)        33.0          44.4             14.3       28.6        33.3             31.0             25.0
Discovery Learning (M/M)     33.0          27.8             28.6       57.1        33.3             25.8             50.0
Extrinsic Motivation (M/M)   60.2          66.7             42.9       64.3        50.0             55.2             50.0
Multitasking (M/M)           40.6          40.5             28.6       35.7        50.0             51.6             0.0
Digital Natives (M/M)        43.8          55.6             14.3       35.7        66.7             40.0             50.0
Dual Coding (EP)             65.4          73.0             71.4       64.3        50.0             58.1             50.0
Summarisation (EP)           66.7          64.9             100.0      50.0        66.7             69.0             100.0
Retrieval Practice (EP)      94.4          89.2             100.0      92.9        100.0            96.8             100.0
Direct Instruction (EP)      97.2          97.3             100.0      92.9        100.0            96.7             100.0
Spacing (EP)                 85.7          83.8             100.0      71.4        83.3             96.6             75.0
(M/M) = myth/misconception; (EP) = effective practice
Table 2. Percentage of Correct Responses on Each Learning Concept Across All Faculty and by College

Pedagogical Knowledge by Academic Discipline/College

For the third research question, we sought to determine whether there were differences in pedagogical knowledge regarding the 10 concepts according to academic field. In particular, we were interested in whether education faculty showed stronger pedagogical knowledge than faculty from other fields, since pedagogy is the core content of education professors. An examination of the descriptive statistics for the mean scores by college (see Tables 1 and 2 above) revealed that the mean scores of faculty from the various colleges were between 5.63 and 6.38 out of a possible 10. There was no college where faculty averaged more than 65% correct answers.
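As an aside, percentages like those in Table 2 above can be derived directly from dichotomously scored Likert responses. The following is a minimal sketch; the column names and data are hypothetical and do not reflect the DLE’s actual export format:

```python
# Hypothetical sketch: dichotomous scoring of 4-point Likert responses and
# percent-correct summaries by college. Data and column names are invented.
import pandas as pd

# 1 = Strongly Disagree ... 4 = Strongly Agree
df = pd.DataFrame({
    "college":         ["Arts and Letters", "Education", "Education", "Business"],
    "learning_styles": [4, 3, 2, 4],   # myth/misconception item
    "retrieval":       [4, 4, 3, 4],   # effective-practice item
})

myths = ["learning_styles"]
effective = ["retrieval"]

# Agreement (3 or 4) counts as correct for effective practices but as
# incorrect for myths/misconceptions (i.e., myth items are reverse-scored).
for item in effective:
    df[item + "_correct"] = (df[item] >= 3).astype(int)
for item in myths:
    df[item + "_correct"] = (df[item] <= 2).astype(int)

score_cols = [c for c in df.columns if c.endswith("_correct")]
print(df.groupby("college")[score_cols].mean() * 100)  # percent correct by college
```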
Additionally, only the subset of faculty who chose not to identify their college scored lower in pedagogical knowledge than those from the College of Education; faculty from all other colleges outperformed education faculty in this regard. To ascertain whether there were statistical differences in pedagogical knowledge according to academic field, faculty were grouped by college (e.g., Arts and Letters, Business, Education, etc.), and ANOVAs were conducted with scores from the pedagogical knowledge scale as the dependent variable. For all ANOVA analyses, equal variances across groups were assumed, as the assumption of homogeneity of variance was not violated in any of the models. Results showed no statistically significant differences in pedagogical knowledge between faculty from different colleges overall, F(5, 93) = 0.253, p = .937, η² = .013. Thus, while an examination of the descriptive statistics revealed that education faculty did not outperform faculty from other areas, inferential analysis indicated that faculty from across the university scored similarly regarding pedagogical knowledge across all items.

To extend the analyses, we sought to determine whether there were differences in pedagogical knowledge according to academic field regarding either the myths and misconceptions or the effective practices. ANOVAs revealed no differences by college in either the myths/misconceptions, F(5, 93) = 1.428, p = .221, η² = .071, or the effective practices, F(5, 93) = 1.700, p = .142, η² = .084. Therefore, faculty from all colleges demonstrated similar levels of knowledge of both myths/misconceptions and effective practices. These results were surprising because faculty from the College of Education would be expected to score higher in pedagogical knowledge for both myths/misconceptions and effective practices, since pedagogical knowledge is central to their field. Yet this was not the case. There would be no reason to expect faculty from the other colleges to perform better or worse than one another, since pedagogical knowledge generally does not fall directly within their fields of study or expertise.

Pedagogical Knowledge by Rank

Also of interest was whether faculty performed differently regarding pedagogical knowledge according to academic rank, which may be considered a reflection of teaching experience. Respondents were grouped by rank (i.e., instructor, lecturer, assistant professor, associate professor, professor), and ANOVAs were conducted with scores on overall pedagogical knowledge, myths/misconceptions, and effective practices as dependent variables. Only full-time faculty were included in the sample. Significant main effects by rank were found for overall pedagogical knowledge, F(4, 100) = 3.020, p = .021, η² = .108, and for the myths/misconceptions, F(4, 100) = 2.836, p = .028, η² = .102, but not for the effective practices, F(4, 100) = 1.455, p = .222, η² = .055. For overall pedagogical knowledge, LSD post hoc analyses showed that professors (p = .008), associate professors (p = .019), and lecturers (p = .044) scored significantly higher than assistant professors. Furthermore, professors scored significantly higher than lecturers (p = .038). The LSD post hoc analyses for myths/misconceptions revealed that professors correctly identified myths significantly more often than assistant professors (p = .008) and instructors (p = .015).
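For readers interested in the computation, the following is a minimal sketch of a one-way ANOVA followed by Fisher’s LSD post hoc comparisons of the kind reported above. The group scores are invented placeholders, not the study’s data, and the LSD step is one common way to implement the procedure (unadjusted pairwise t-tests using the ANOVA’s pooled error term):

```python
# Hypothetical sketch: one-way ANOVA across groups followed by Fisher's LSD
# post hoc tests. Scores are invented placeholders, not the study's data.
import numpy as np
from scipy import stats

groups = {
    "professor": np.array([7, 6, 8, 6, 7, 7]),
    "associate": np.array([6, 7, 6, 8, 5, 7]),
    "assistant": np.array([5, 4, 6, 5, 5, 6]),
}

# Omnibus test
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.3f}, p = {p_val:.3f}")

# Fisher's LSD: pairwise t-tests using the pooled within-group variance (MSW)
# and the error degrees of freedom from the ANOVA, unadjusted for multiplicity.
k = len(groups)
n_total = sum(len(g) for g in groups.values())
df_error = n_total - k
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
msw = ssw / df_error

names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        g1, g2 = groups[names[i]], groups[names[j]]
        se = np.sqrt(msw * (1 / len(g1) + 1 / len(g2)))
        t = (g1.mean() - g2.mean()) / se
        p = 2 * stats.t.sf(abs(t), df_error)
        print(f"{names[i]} vs {names[j]}: t = {t:.2f}, p = {p:.3f}")
```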
Full and associate professors outperformed less experienced professors, potentially due to having more expertise and more years of experience in instruction on average. Descriptive statistics regarding rank may be found below in Tables 3, 4, and 5. Note that in Table 3 the possible score was between 0 and 10, whereas in Tables 4 and 5 the possible score was between 0 and 5. For myths and misconceptions, the scores for faculty at all ranks fell below the 50% mark (M = 2.5) except for professors, who scored just above it (M = 2.55).

Rank               N     Mean   Std. Dev   Std. Error
Full Professors    26    6.58   1.27       0.25
Associate Profs    29    6.41   2.13       0.40
Assistant Profs    28    5.39   1.47       0.28
Lecturers          18    6.39   1.42       0.33
Instructors        4     4.75   0.50       0.25
Total              105   6.11   1.68       0.16
Table 3. Pedagogical Knowledge Scores by Rank Overall

Rank               N     Mean   Std. Dev   Std. Error
Full Professors    26    2.55   0.50       0.10
Associate Profs    29    2.43   0.41       0.08
Assistant Profs    28    2.25   0.38       0.07
Lecturers          18    2.44   0.39       0.09
Instructors        4     2.00   0.16       0.08
Total              105   2.40   0.43       0.04
Table 4. Pedagogical Knowledge Scores by Rank for Myths and Misconceptions

Rank               N     Mean   Std. Dev   Std. Error
Full Professors    26    3.01   0.32       0.06
Associate Profs    29    2.89   0.22       0.04
Assistant Profs    28    2.89   0.29       0.05
Lecturers          18    3.00   0.26       0.06
Instructors        4     3.10   0.38       0.19
Total              105   2.95   0.28       0.03
Table 5. Pedagogical Knowledge Scores by Rank for Effective Practices

Metacognitive Awareness of Pedagogical Knowledge

Of particular interest was whether faculty demonstrated metacognitive awareness of their own levels of pedagogical knowledge. As noted in the method section above, metacognition, or metacognitive awareness, has typically been measured by assessing respondents’ level of confidence in their own knowledge or ability, then assessing them on an objective test of that knowledge or ability and conducting a correlational analysis to determine whether their self-beliefs correspond with actual knowledge or performance levels on the constructs. For this analysis, we conducted Pearson correlations utilising the faculty’s scores on the Confidence in Pedagogical Knowledge Scale and their scores on the Pedagogical Knowledge Items assessment to test for a relationship between the two. When measuring the metacognitive awareness of pedagogy for all faculty who participated, a Pearson correlation revealed a weak, non-significant negative correlation, r(107) = -.157, p = .105. Accurate metacognitive awareness is only shown when there is a significant positive correlation between one’s confidence in, or views of, one’s own knowledge or ability and one’s actual levels in those areas. A negative or non-significant correlation would indicate poor metacognition, in that the respondents’ views of their own knowledge or abilities do not correspond with their actual levels. That was the case across all faculty in the study. In regard to faculty in the College of Education, whom we hypothesised would have greater pedagogical knowledge and awareness of their own levels of expertise in the area, we again found no correlation between their self-reported level of expertise and their actual level of pedagogical knowledge, r(14) = .003, p = .992. We also tested for differences in metacognitive awareness between faculty based on academic rank; however, no significant differences were found on this variable, with more experienced faculty showing no greater awareness in this regard than newer faculty.
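For completeness, the following is a minimal sketch of how the Cronbach’s alpha reliability check, the drop-one-item check described in the limitations below, and the confidence-knowledge Pearson correlation might be computed. All item responses are invented placeholders generated at random, not the study’s data:

```python
# Hypothetical sketch: Cronbach's alpha for a five-item confidence scale,
# a drop-one-item reliability check, and the confidence-knowledge Pearson
# correlation. All responses are random placeholders, so alpha will be near
# zero here; real scale data with correlated items would score much higher.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(107, 5)).astype(float)  # 107 faculty x 5 Likert items
knowledge = rng.integers(0, 11, size=107).astype(float)  # 0-10 knowledge scores

def cronbach_alpha(x: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"alpha (all 5 items): {cronbach_alpha(items):.2f}")

# Drop-one check: does removing any single item improve reliability?
for i in range(items.shape[1]):
    reduced = np.delete(items, i, axis=1)
    print(f"alpha without item {i + 1}: {cronbach_alpha(reduced):.2f}")

# Metacognitive awareness: correlate the composite confidence score with
# objective pedagogical knowledge.
confidence = items.mean(axis=1)
r, p = pearsonr(confidence, knowledge)
print(f"r = {r:.3f}, p = {p:.3f}")
```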
Overall, faculty reported high confidence that their teaching was heavily influenced by their familiarity with effective teaching practices (81.2% agreement) and did not believe that other faculty members had better knowledge of those practices (87.7% agreement). While faculty did show recognition of effective practices, they also endorsed myths and misconceptions regarding pedagogy, the area in which they scored the poorest.

5. Discussion

The goals of this study were to assess the pedagogical knowledge, as a function of both effective practices and misconceptions, of faculty at a large teaching-centred state university and to gauge their metacognitive awareness regarding their own instructional practices. In addition, we sought to discern whether these outcomes varied according to academic discipline, as represented by college, and by academic rank, which can often serve as a proxy for experience in higher education. The data present a picture of faculty who tended to characterise all the pedagogical approaches they were presented with as effective, regardless of whether those approaches were myths or misconceptions or actually effective strategies. This resulted in a dynamic in which faculty correctly classified effective practices as beneficial to learning but also incorrectly endorsed myths and misconceptions. This suggests that faculty at the university in this study are often incorrect in their assumptions regarding ineffective practices and mistakenly believe that practices debunked by research benefit students. This finding is consistent with recent research indicating that educators continue to report holding beliefs in neuromyths despite a wealth of evidence to the contrary (Macdonald et al., 2017; Newton & Salvi, 2020; Rousseau, 2021).

Faculty answered the majority of the pedagogical items correctly, with most incorrect answers coming in the form of endorsing myths and misconceptions as effective practices. For example, two-thirds of faculty believed unguided discovery learning and learning styles to be beneficial to student learning, while more than half were incorrect about multitasking and digital natives. The only misconception that the majority of faculty correctly characterised was the use of extrinsic motivators, with most classifying reward systems as ineffective long-term instructional strategies. For the effective practices, the vast majority of faculty correctly recognised the efficacy of direct instruction, retrieval practice, and spacing, with a smaller majority recognising the effectiveness of summarisation and dual coding. While faculty demonstrated an understanding that certain research-supported strategies are effective, many also believed several of the myths and misconceptions to be beneficial. This may indicate that instructors are not successfully differentiating between effective and ineffective practices, a finding consistent with prior research (Rohrer & Pashler, 2010), suggesting that faculty may often unknowingly choose ineffective methods. If this is the case, then it is likely that many unproductive teaching methods are used in college classrooms as well as at the public-school level, where similar findings have emerged (Lethaby & Harries, 2016). The results were somewhat less clear regarding the pedagogical knowledge of faculty according to rank. We did not have specific expectations about how faculty at different ranks would perform.
It was possible that higher ranking members would show stronger pedagogical knowledge due to having more experience in instruction, although, considering that some lower ranked faculty may have previously been employed at other universities or held multiple postdoc positions, higher ranked faculty may not universally have been more experienced. Another possibility was that newer faculty would be more familiar with learning science than longer tenured ones because they had more recently been exposed to the latest research when completing their doctorates. The former scenario appeared more likely to be the case than the latter, with full professors and associate professors outperforming assistant professors in overall pedagogical knowledge. While faculty of all ranks performed similarly in identifying effective instructional practices, full professors were less likely to endorse myths and misconceptions than assistant professors and instructors. Thus, faculty across ranks appeared to hold similar views regarding effective practices. Still, the more newly hired or less experienced faculty were more likely to endorse myths and misconceptions than tenured faculty. One possibility based on these results is that newly minted Ph.D. holders are not being exposed to accurate learning science in their doctoral programmes and that myths and misconceptions are proliferating at that level.

The most notable contributions of this study emerged through analyses resulting in non-significant findings. Surprisingly, College of Education faculty, whose academic discipline is entirely rooted in pedagogy, did not demonstrate better understanding of these research-based, well-established instructional concepts than faculty from other disciplines. With the exception of unguided discovery learning, education faculty were just as likely to endorse myths and misconceptions about learning and no more likely to recognise practices supported by well-established research. In fact, while the difference was not statistically significant, education faculty scored lower in pedagogical knowledge than faculty from each of the other colleges.

Additionally, faculty across the university showed a lack of metacognitive awareness in regard to their pedagogical knowledge. The absence of a positive correlation between faculty members’ confidence in their knowledge of teaching and learning practices and their actual knowledge of those practices revealed a limited understanding of their own knowledge in the area. In short, being confident in their knowledge of pedagogy was not related to actually having high levels of knowledge on the topic. This dynamic was true of faculty from the College of Education as well. Individuals with low levels of knowledge yet high levels of confidence in that knowledge are unlikely to change their views and seek ways to improve their knowledge or performance. In this case, such faculty would be unlikely to improve upon or learn about new instructional techniques over time. The dichotomy between actual knowledge and confidence in one’s own knowledge was underscored by the preponderance of faculty, nearly 88%, who believed that others at the university did not know more about pedagogical approaches than themselves. In one respect, this demonstrates high self-efficacy in their teaching practices, but it may also be cause for concern.
Considering that the vast majority of faculty participants were trained in and taught in fields in which learning science was not central to their discipline, a belief that there were no others at the university with more expertise in pedagogy could lead to circumstances in which faculty are uninterested in pursuing more effective methods or learning about emerging research on teaching practices. This situation may mirror that of K-12 education, where public school teachers may receive limited instruction in learning science and ultimately default to relying on anecdotal experiences to guide their practice. This particular dynamic, high levels of confidence paired with low levels of knowledge, represents the well-established Dunning-Kruger effect (Kruger & Dunning, 1999). The Dunning-Kruger effect most commonly appears in respondents with the lowest levels of knowledge on a subject, when those with little expertise overclaim and believe their knowledge to be high in a field they have limited familiarity with (Atir et al., 2015; Pennycook et al., 2017). One novel contribution of the present study is that the participants, who cannot be viewed as having the lowest levels of expertise, still demonstrated what can be considered the Dunning-Kruger effect because their confidence far exceeded their actual knowledge levels. Faculty almost universally held terminal degrees in an academic field, most likely from national research universities. This suggests that the effect does not apply only to those with the lowest levels of knowledge or those whose knowledge is outside their field. In this case, all participants had relatively high levels of experience and some knowledge of the learning sciences. However, it appears that learning science on pedagogy is specialised enough that even those with extensive experience teaching may have limited knowledge of it while at the same time being confident in their familiarity with the subject.

It is important to note that there is no reason to conclude that the issues revealed here are unique or specific to the university that served as the focus of the study, as the participants were not educated at this institution. The vast majority of faculty at the university received their Ph.D.s from what are considered to be high-level research universities, categorised under the Carnegie classification system as “Doctoral Universities: Very High Research Activity”. Thus, it is likely that their cohort members educated at the same universities who secured positions teaching at other institutions would hold similar views. Considering this situation, these results may indicate much wider issues in the preparation of university faculty for teaching purposes.

Limitations

One limitation of the present study was that the assessment designed to measure knowledge of pedagogical practices was restricted to ten items. There were two primary reasons for this. First, as an extension of a much larger survey instrument, we were limited in the number of items we were able to introduce. It can certainly be argued that ten items are not enough to capture the full range of pedagogical practices, without enough breadth to ascertain the full scope of possibilities, though using a mixture of effective practices and myths/misconceptions allowed for more nuanced analyses of pedagogical knowledge.
The second reason for the limited number of pedagogical items was that it was somewhat challenging to identify approaches for which there was an abundance of research evidence, to the extent that we could consider the science to be settled, and which were not confined to certain developmental levels or content areas. For instance, a practice such as interleaving is well-supported across age ranges, but research has most commonly linked it to math instruction (Rohrer & Pashler, 2010), and it is less clear how it may apply to language arts or history instruction. Likewise, practices like the use of manipulatives have been shown to be effective, but mostly for young children and in somewhat narrow contexts (Willingham, 2017). For these reasons, the measure of pedagogical knowledge was limited to a short list of practices.

Another potential limitation was the question of how settled is settled. Unlike the hard sciences, research in the social sciences rarely approaches that level of consensus, and there may be continued debate and contradictory findings for decades. Therefore, we chose concepts for which we determined there was the greatest amount of consensus among researchers and the greatest abundance of robust empirical evidence. We drew upon concepts such as those compiled by organisations like the American Psychological Association (APA, 2015) or in a compilation of seminal studies in educational psychology (Kirschner & Hendrick, 2020), each of which is supported by hundreds of studies. Therefore, although some academics may dispute our choices of concepts, we are confident in the validity and reliability of the ones we chose to include, though the criteria limited our options.

Additionally, the reliability of the Confidence in Pedagogical Knowledge Scale was not as strong as we would have liked (α = .79). Reliability was further tested by removing each of the five items in turn to ascertain whether any four-item combination proved more reliable, but the highest reliability was achieved by including all five items. Future researchers could improve upon this scale by introducing stronger items or replacing ones from the current scale. Nonetheless, while ideally the reliability of the present scale would have been stronger, it was acceptable.

A final limitation was that the data were collected from just one large state university and the sample was restricted to 107 faculty members. But as noted above, these faculty members were trained at a wide range of different universities, the majority of them national research universities, so we do not view the results as being limited to a reflection of the one institution that was the focus of the study. However, we recommend that future researchers extend their samples to several institutions at a variety of levels, such as community colleges, comprehensive universities, and research universities, as well as to teachers in K-12 settings.

Implications

In sum, the data suggest a situation in which faculty do not have a strong understanding of which pedagogical approaches are effective, in contrast to those which are not according to learning science, yet they feel relatively confident that they do. Faculty were able to correctly identify effective practices but could not distinguish those from myths and misconceptions, and the widespread endorsement of certain misconceptions such as learning styles and discovery learning indicates that faculty may not be employing the most efficacious teaching methods.
Implications

In sum, the data suggest a situation in which faculty do not have a strong understanding of which pedagogical approaches are effective and which are not according to learning science, yet they feel relatively confident that they do. Faculty were able to correctly identify effective practices but could not distinguish them from myths and misconceptions, and the widespread endorsement of certain misconceptions, such as learning styles and discovery learning, indicates that faculty may not be employing the most efficacious teaching methods. The limited metacognitive awareness further suggests that such faculty members may be unlikely to seek out better methods if they are unaware that a variety of concepts they endorse do not benefit learning. It is understandable that faculty members from the colleges of Arts and Letters, Business, Math, and Sciences would not be as aware of the cognitive science supporting different learning strategies, as their doctoral preparation focused on promoting the highest levels of knowledge and research in their specific disciplines. However, the issue is more complex for those in the College of Education.

The remedies for faculty from colleges other than Education are straightforward. University programmes that prepare Ph.D. candidates to become college faculty can ensure that they include courses structured to familiarise future faculty with recent research on human learning and how cognitive science informs teaching practice in real-world applications. Additionally, most universities have centres for teaching and learning designed to provide professional development for faculty, particularly in the area of pedagogy. These centres should endeavour to provide the most recent and accurate information and avoid endorsing concepts shown by learning science to be misconceptions. Both options are already in place at many universities; our results suggest that they must be implemented more effectively.

The remedies for colleges of education are more daunting, because pedagogy is the central content of these programmes. Colleges of education prepare K-12 teachers, K-12 administrators, district and state-level administrators, and, very often, university administrators who receive their doctorates in leadership programmes housed in colleges of education. If education professors are not aware of the most well-established teaching and learning methods, then neither will their students be, including K-12 teachers and administrators at every level. And because nearly all of the education faculty at the university in this study received their Ph.D.s from a range of R-1 institutions, the issue may be widespread. State and national accrediting bodies may not sufficiently account for professors' levels of pedagogical knowledge, which may partially explain why misinformation about learning, such as learning styles, is included in teacher licensure materials in the majority of states (Furey, 2020). Questions have arisen about the efficacy of training provided by colleges of education (Asher, 2018; Goldhaber, 2019), and our results appear to underscore those concerns.

If replicated in further studies, these results should compel those in academia to rethink and perhaps overhaul the foundations of colleges of education, in addition to making substantial alterations to accrediting bodies for colleges of education and to state boards of education in the U.S. These organisations may not be meeting their primary responsibility of ensuring that graduates adequately understand teaching and learning practices. It should not be sufficient to simply train teachers to function within a system. They should be taught what works to enhance student learning in order to become better teachers, but that is not possible if those teaching them at the university level do not have a full understanding of fundamental principles of learning.
A first step in the process would be for universities and colleges to acknowledge the potential limitation and modify curricula to ensure that adequate attention is devoted to coursework on how human learning occurs. This would necessitate employing personnel with the expertise to teach such courses, which may entail hiring instructors with skill sets that are not currently prioritised in order to advance that knowledge base. For existing faculty who have not had the benefit of coursework in the learning sciences, professional development can be offered. This may consist of learning modules in which faculty read and discuss research on learning outcomes and instructional strategies from fields such as cognitive psychology, neuroscience, educational psychology, and cognitive science. Ideally, these modules would incorporate the very methods being studied and provide videos and demonstrations of the strategies in use in classroom settings. This is a general overview of initial steps, but a key point is that this training should ultimately reach current faculty, graduate students who are likely to one day become faculty or work in higher education, and those preparing for roles in K-12 education.

6. Conclusion

This study, along with a growing body of research, suggests that instructors are not currently being adequately trained in the cognitive science that informs teaching and learning. If myths and misconceptions about learning persist, those instructors will be unlikely to optimise student learning. By acquiring a more comprehensive understanding of learning science, university instructors will have the opportunity to employ teaching practices shown to enhance cognition, such as methods that increase retention of information, problem-solving skills, or procedural knowledge. There is little doubt that their students would benefit from the use of research-based practices. This is especially true of faculty in colleges of education, who prepare K-12 teachers and administrators at a variety of levels, because an understanding of such approaches could then be transferred to those who would put them to use in classrooms with younger learners. We recommend that universities prioritise an emphasis on learning science to ensure that the candidates they train for teaching positions are aware of effective teaching and learning practices and can distinguish them from ineffective ones, which should ultimately enhance educational outcomes for all involved.

About the Authors

Joshua A. Cuevas, ORCID: 0000-0003-3237-6670, University of North Georgia, [email protected]
Bryan L. Dawson, University of North Georgia, [email protected]
Gina Childers, Texas Tech University, [email protected]

References

Alloway, T. P., Gathercole, S. E., & Pickering, S. J. (2006). Verbal and visuospatial short-term and working memory in children: Are they separable? Child Development, 77(6), pp. 1698-1716. https://doi.org/10.1111/j.1467-8624.2006.00968.x.
American Psychological Association, Coalition for Psychology in Schools and Education. (2015). Top 20 principles from psychology for preK-12 teaching and learning. Retrieved from www.apa.org/ed/schools/cpse/top-twenty-principles.pdf.
Anderson, M. C., & Thiede, K. W. (2008). Why do delayed summaries improve metacomprehension accuracy? Acta Psychologica, 128, pp. 110-118. https://doi.org/10.1016/j.actpsy.2007.10.006.
Asher, L. (2018, April 8). How ed schools became a menace: They trained an army of bureaucrats who are pushing the academy toward ideological fundamentalism. The Chronicle of Higher Education. https://www.chronicle.com/article/How-Ed-Schools-Became-a-Menace/243062.
Atir, S., Rosenzweig, E., & Dunning, D. (2015). When knowledge knows no bounds: Self-perceived expertise predicts claims of impossible knowledge. Psychological Science, 26(8), pp. 1295-1303. https://doi.org/10.1177/0956797615588195.
Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C. L. C. (1991). Effects of frequent classroom testing. Journal of Educational Research, 85, pp. 89-99. https://doi.org/10.1080/00220671.1991.10702818.
Bennett, S., Maton, K., & Kervin, L. (2008). The 'digital natives' debate: A critical review of the evidence. British Journal of Educational Technology, 39(5), pp. 775-786. https://doi.org/10.1111/j.1467-8535.2007.00793.x.
Bogaerds-Hazenberg, S., Evers-Vermeul, J., & van den Bergh, H. (2020). A meta-analysis on the effects of text structure instruction on reading comprehension in the upper elementary grades. Reading Research Quarterly, 56(3), pp. 435-462. https://doi.org/10.1002/rrq.311.
Brown, A. L., & Day, J. D. (1983). Macrorules for summarizing texts: The development of expertise. Journal of Verbal Learning and Verbal Behavior, 22, pp. 1-14. https://doi.org/10.1016/S0022-5371(83)80002-4.
Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H. K., & Pashler, H. (2012). Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction. Educational Psychology Review, 24(3), pp. 369-378. https://doi.org/10.1007/s10648-012-9205-z.
Carr, E., & Ogle, D. (1987). K-W-L Plus: A strategy for comprehension and summarization. Journal of Reading, 30(7), pp. 626-631. https://eric.ed.gov/?id=EJ350560.
Cepeda, N. J., Coburn, N., Rohrer, D., Wixted, J. T., Mozer, M. C., & Pashler, H. (2009). Optimizing distributed practice: Theoretical analysis and practical implications. Experimental Psychology, 56(4), pp. 236-246. https://doi.org/10.1027/1618-3169.56.4.236.
Chang, K., Sung, Y., & Chen, I. (2002). The effect of concept mapping to enhance text comprehension and summarization. The Journal of Experimental Education, 71(1), pp. 5-23. https://doi.org/10.1080/00220970209602054.
Clark, R. E., Kirschner, P. A., & Sweller, J. (2012). Putting students on the path to learning: The case for fully guided instruction. American Educator, 36(1), pp. 6-11. https://eric.ed.gov/?id=EJ971752.
Clark, J. M., & Paivio, A. (1991). Dual coding theory and education. Educational Psychology Review, 3(3), pp. 149-210. https://doi.org/10.1007/BF01320076.
Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. London: Learning & Skills Research Centre.
Cuevas, J. A. (2015). Is learning styles-based instruction effective? A comprehensive analysis of recent research on learning styles. Theory and Research in Education, 13(3), pp. 308-333. https://doi.org/10.1177/1477878515606621.
Cuevas, J. A. (2016a). An analysis of current evidence supporting two alternate learning models: Learning styles and dual coding. Journal of Educational Sciences & Psychology, 6(1), pp. 1-13. https://www.researchgate.net/publication/301692526.
Cuevas, J. A. (2016b). Cognitive psychology's case for teaching higher order thinking. Professional Educator, 15(4), pp. 4-7. https://www.academia.edu/28947876.
Cuevas, J. A. (2017). Visual and auditory learning: Differentiating instruction via sensory modality and its effects on memory. In Student Achievement: Perspectives, Assessment and Improvement Strategies (pp. 29-54). Nova Science Publishers. ISBN-13: 978-1536102055.
Cuevas, J. A. (2019). Addressing the crisis in education: External threats, embracing cognitive science, and the need for a more engaged citizenry. In Nata, R. V. (Ed.), Progress in Education (Vol. 55, pp. 1-38). Nova Science Publishers. ISBN: 978-1-53614-551-9.
Cuevas, J. A., Childers, G., & Dawson, B. L. (2023). A rationale for promoting cognitive science in teacher education: Deconstructing prevailing learning myths and advancing research-based practices. Trends in Neuroscience and Education. https://doi.org/10.1016/j.tine.2023.100209.
Cuevas, J. A., & Dawson, B. L. (2018). A test of two alternative cognitive processing models: Learning styles and dual coding. Theory and Research in Education, 16(1), pp. 40-64. https://doi.org/10.1177/1477878517731450.
Dawson, B. L., & Cuevas, J. A. (2019). An assessment of intergroup dynamics at a multi-campus university: One university, two cultures. Studies in Higher Education, 45(6), pp. 1047-1063. https://doi.org/10.1080/03075079.2019.1628198.
Day, J. (1986). Teaching summarization skills: Influences of student ability level and strategy difficulty. Cognition and Instruction, 3(3), pp. 193-210. https://doi.org/10.1207/s1532690xci0303_3.
Deci, E., Vallerand, R., Pelletier, L., & Ryan, R. (1991). Motivation and education: The self-determination perspective. Educational Psychologist, 26(3&4), pp. 325-346. https://doi.org/10.1080/00461520.1991.9653137.
Deci, E., Koestner, R., & Ryan, R. (2001). Extrinsic rewards and intrinsic motivation in education: Reconsidered once again. Review of Educational Research, 71(1), pp. 1-27. https://doi.org/10.3102/00346543071001001.
Demirbilek, M., & Talan, T. (2017). The effect of social media multitasking on classroom performance. Active Learning in Higher Education, 19(2), pp. 117-129. https://doi.org/10.1177/1469787417721382.
Di Virgilio, G., & Clarke, S. (1997). Direct interhemispheric visual input to human speech areas. Human Brain Mapping, 5, pp. 347-354. https://doi.org/10.1002/(SICI)1097-0193(1997)5:5<347::AID-HBM3>3.0.CO;2-3.
Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students' learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), pp. 4-58. https://doi.org/10.1177/1529100612453266.
Dunlosky, J., Rawson, K. A., & Middleton, E. L. (2005). What constrains the accuracy of metacomprehension judgments? Testing the transfer-appropriate-monitoring and accessibility hypotheses. Journal of Memory and Language, 52, pp. 551-565. https://doi.org/10.1016/j.jml.2005.01.011.
Fiebach, C. J., & Friederici, A. D. (2003). Processing concrete words: fMRI evidence against a specific right-hemisphere involvement. Neuropsychologia, 42(1), pp. 62-70. https://doi.org/10.1016/S0028-3932(03)00145-3.
Furey, W. (2020). The stubborn myth of "learning styles": State teacher-license prep materials peddle a debunked theory. Education Next, 20(3), pp. 8-12. https://www.educationnext.org/.
Furtak, E. M., Seidel, T., Iverson, H., & Briggs, D. C. (2012). Experimental and quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Review of Educational Research, 82, pp. 300-329. https://doi.org/10.3102/0034654312457206.
Gill, A., Trask-Kerr, K., & Vella-Brodrick, D. (2021). Systematic review of adolescent conceptions of success: Implications for wellbeing and positive education. Educational Psychology Review, 33, pp. 1553-1582. https://doi.org/10.1007/s10648-021-09605-w.
Goldhaber, D. (2019). Evidence-based teacher preparation: Policy context and what we know. Journal of Teacher Education, 70(2), pp. 90-101. https://doi.org/10.1177/0022487118800712.
Hagaman, J. L., Casey, K. J., & Reid, R. (2016). Paraphrasing strategy instruction for struggling readers. Preventing School Failure: Alternative Education for Children and Youth, 60, pp. 43-52. http://dx.doi.org/10.1080/1045988X.2014.966802.
Higher Education Research Institute. (2015, October). HERI research brief. https://www.heri.ucla.edu/briefs/DLE/DLE-2015-Brief.pdf.
Hite, R., Jones, M. G., Childers, G., Chesnutt, K., Corin, E., & Pereyra, M. (2017). Pre-service and in-service science teachers' technological acceptance of 3D, haptic-enabled virtual reality instructional technology. Electronic Journal of Science Education, 23(1), pp. 1-34. https://eric.ed.gov/?id=EJ1203195.
Hodes, C. L. (1998). Understanding visual literacy through visual informational processing. Journal of Visual Literacy, 18(2), pp. 131-136. https://doi.org/10.1080/23796529.1998.11674534.
Jones, C., Ramanau, R., Cross, S., & Healing, G. (2010). Net generation or digital natives: Is there a distinct new generation entering university? Computers & Education, 54(3), pp. 722-732. https://doi.org/10.1016/j.compedu.2009.09.022.
Junco, R., & Cotten, S. (2011). No A 4 U: The relationship between multitasking and academic performance. Computers & Education, 59(2), pp. 505-514. https://doi.org/10.1016/j.compedu.2011.12.023.
Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99, pp. 122-149. https://doi.org/10.1037/0033-295X.99.1.122.
Karpicke, J. D., & Grimaldi, P. J. (2012). Retrieval-based learning: A perspective for enhancing meaningful learning. Educational Psychology Review, 24(3), pp. 401-418. https://doi.org/10.1007/s10648-012-9202-2.
Kintsch, W., & van Dijk, T. (1978). Toward a model of text comprehension and production. Psychological Review, 85, pp. 363-394. https://doi.org/10.1037/0033-295X.85.5.363.
Kirschner, P. A., & De Bruyckere, P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67, pp. 135-142. https://doi.org/10.1016/j.tate.2017.06.001.
Kirschner, P. A., & Hendrick, C. (2020). How learning happens: Seminal works in educational psychology and what they mean in practice. New York, NY: Routledge.
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), pp. 75-86. https://doi.org/10.1207/s15326985ep4102_1.
Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), pp. 169-183. https://doi.org/10.1080/00461520.2013.804395.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), pp. 1121-1134. https://doi.org/10.1037/0022-3514.77.6.1121.
Larsen, D. P. (2018). Planning education for long-term retention: The cognitive science and implementation of retrieval practice. Seminars in Neurology, 38(4), pp. 449-456. https://doi.org/10.1055/s-0038-1666983.
Latimier, A., Peyre, H., & Ramus, F. (2021). A meta-analytic review of the benefit of spacing out retrieval practice episodes on retention. Educational Psychology Review, 33, pp. 959-987. https://doi.org/10.1007/s10648-020-09572-8.
Lee, F. J., & Taatgen, N. (2002). Multitasking as skill acquisition. CogSci'02: Proceedings of the Cognitive Science Society, August 2002.
Leopold, C., & Leutner, D. (2012). Science text comprehension: Drawing, main idea selection, and summarizing as learning strategies. Learning and Instruction, 22, pp. 16-26. https://doi.org/10.1016/j.learninstruc.2011.05.005.
Lethaby, C., & Harries, P. (2016). Learning styles and teacher training: Are we perpetuating neuromyths? ELT Journal, 70(1), pp. 16-27. https://doi.org/10.1093/elt/ccv051.
Lorch, R., & Pugzles Lorch, E. (1996). Effects of headings on text recall and summarization. Contemporary Educational Psychology, 21(3), pp. 261-278. https://doi.org/10.1006/ceps.1996.0022.
Macdonald, K., Germine, L., Anderson, A., Christodoulou, J., & McGrath, L. M. (2017). Dispelling the myth: Training in education or neuroscience decreases but does not eliminate beliefs in neuromyths. Frontiers in Psychology, 8:1314. https://doi.org/10.3389/fpsyg.2017.01314.
Margaryan, A., Littlejohn, A., & Vojt, G. (2011). Are digital natives a myth or reality? University students' use of digital technologies. Computers & Education, 56(2), pp. 429-440. https://doi.org/10.1016/j.compedu.2010.09.004.
Mayer, R. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59(1), pp. 14-19. https://doi.org/10.1037/0003-066X.59.1.14.
McDaniel, M. A., Agarwal, P. K., Huelser, B. J., McDermott, K. B., & Roediger, H. L. (2011). Test-enhanced learning in a middle school science classroom: The effects of quiz frequency and placement. Journal of Educational Psychology, 103(2), pp. 399-414. https://doi.org/10.1037/a0021782.
Mercimek, B., Akbulut, Y., Dönmez, O., & Sak, U. (2020). Multitasking impairs learning from multimedia across gifted and non-gifted students. Educational Technology, Research and Development, 68(3), pp. 995-1016. https://doi.org/10.1007/s11423-019-09717-9.
Nancekivell, S. E., Sun, X., Gelman, S. A., & Shah, P. (2021). A slippery myth: How learning style beliefs shape reasoning about multimodal instruction and related scientific evidence. Cognitive Science. https://doi.org/10.1111/cogs.13047.
National Research Council. (2018). How people learn II: Learners, contexts, and cultures. Washington, DC: National Academy Press.
Newton, P. M., & Salvi, A. (2020). How common is belief in the learning styles neuromyth, and does it matter? A pragmatic systematic review. Frontiers in Education, 5:602451. https://doi.org/10.3389/feduc.2020.602451.
Paivio, A. (1969). Mental imagery in associative learning and memory. Psychological Review, 76, pp. 241-263.
Paivio, A. (1986). Mental representations. New York, NY: Oxford University Press.
Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2009). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9, pp. 105-119. https://doi.org/10.1111/j.1539-6053.2009.01038.x.
Pennycook, G., & Rand, D. G. (2019). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality, 88(2), pp. 185-200. https://doi.org/10.1111/jopy.12476.
Pennycook, G., Ross, R. M., Koehler, D. J., & Fugelsang, J. A. (2017). Dunning-Kruger effects in reasoning: Theoretical implications of the failure to recognize incompetence. Psychonomic Bulletin & Review, 24(6), pp. 1774-1784. https://doi.org/10.3758/s13423-017-1242-7.
Perin, D., Lauterbach, M., Raufman, J., & Kalamkarian, H. S. (2017). Text-based writing of low-skilled postsecondary students: Relation to comprehension, self-efficacy and teacher judgments. Reading and Writing, 30(4), pp. 887-915. https://doi.org/10.1007/s11145-016-9706-0.
Prensky, M. (2001). Digital natives, digital immigrants part 1. On the Horizon, 9(5), pp. 1-6. http://dx.doi.org/10.1108/10748120110424816.
Prensky, M. (2012). Digital natives to digital wisdom: Hopeful essays for 21st century learning. Thousand Oaks, CA: Corwin.
Riener, C., & Willingham, D. (2010). The myth of learning styles. Change, 42(5), pp. 32-35.
Roediger, H. L., & Butler, A. C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), pp. 20-27. https://doi.org/10.1016/j.tics.2010.09.003.
Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), pp. 249-255. https://doi.org/10.1111/j.1467-9280.2006.01693.x.
Rogers, J., & Cheung, A. (2020). Pre-service teacher education may perpetuate myths about teaching and learning. Journal of Education for Teaching, 46(3), pp. 417-420. https://doi.org/10.1080/02607476.2020.1766835.
Rogowsky, B. A., Calhoun, B. M., & Tallal, P. (2015). Matching learning style to instructional method: Effects on comprehension. Journal of Educational Psychology, 107(1), pp. 64-78. https://doi.org/10.1037/a0037478.
Rogowsky, B. A., Calhoun, B. M., & Tallal, P. (2020). Providing instruction based on students' learning style preferences does not improve learning. Frontiers in Psychology, 11:164. https://doi.org/10.3389/fpsyg.2020.00164.
Rohrer, D., & Pashler, H. (2010). Recent research on human learning challenges conventional instructional strategies. Educational Researcher, 39(5), pp. 406-412. https://doi.org/10.3102/0013189X1037477.
Rohrer, D., & Pashler, H. (2012). Learning styles: Where's the evidence? Medical Education, 46(7), pp. 634-635. https://eric.ed.gov/?id=ED535732.
Rosenshine, B. V. (2012). Principles of instruction: Research-based strategies that all teachers should know. American Educator, 36(1), pp. 12-19. https://eric.ed.gov/?id=EJ971753.
Rousseau, L. (2021). Interventions to dispel neuromyths in educational settings—A review. Frontiers in Psychology, 12:719692. https://doi.org/10.3389/fpsyg.2021.719692.
Rowland, C. A. (2014). The effect of testing versus restudy on retention: A meta-analytic review of the testing effect. Psychological Bulletin, 140(6), pp. 1432-1463. https://doi.org/10.1037/a0037559.
Ryan, R., & Deci, E. (2000). Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemporary Educational Psychology, 25, pp. 54-67. https://doi.org/10.1006/ceps.1999.1020.
Ryan, R. M., & Deci, E. L. (2009). Promoting self-determined school engagement: Motivation, learning, and well-being. In K. R. Wenzel & A. Wigfield (Eds.), Handbook of motivation at school (pp. 171-195). Routledge/Taylor & Francis Group.
Sadoski, M., Goetz, E. T., & Avila, E. (1995). Concreteness effects in text recall: Dual coding or context availability? Reading Research Quarterly, 30(2), pp. 278-288. https://doi.org/10.2307/748038.
Sana, F., Weston, T., & Cepeda, N. (2012). Laptop multitasking hinders classroom learning for both users and nearby peers. Computers & Education, 62, pp. 24-31. https://doi.org/10.1016/j.compedu.2012.10.003.
Scott, C. (2010). The enduring appeal of 'learning styles'. Australian Journal of Education, 54(1), pp. 5-17. https://doi.org/10.1177/000494411005400102.
Seabrook, R., Brown, G. D. A., & Solity, J. E. (2005). Distributed and massed practice: From laboratory to classroom. Applied Cognitive Psychology, 19(1), pp. 107-122. https://doi.org/10.1002/acp.1066.
Sharps, M. J., & Price, J. L. (1992). Auditory imagery and free recall. The Journal of General Psychology, 119(1), pp. 81-87. https://doi.org/10.1080/00221309.1992.9921160.
Shelton, A., Lemons, C., & Wexler, J. (2020). Supporting main idea identification and text summarization in middle school co-taught classes. Intervention in School and Clinic, 56(4), pp. 217-223. https://doi.org/10.1177/1053451220944380.
Smith, J., Skrbis, Z., & Western, M. (2012). Beneath the 'Digital Native' myth: Understanding young Australians' online time use. Journal of Sociology, 49(1), pp. 97-118. https://doi.org/10.1177/1440783311434856.
Solis, M., Ciullo, S., Vaughn, S., Pyle, N., Hassaram, B., & Leroux, A. (2012). Reading comprehension interventions for middle school students with learning disabilities: A synthesis of 30 years of research. Journal of Learning Disabilities, 45(4), pp. 327-340. https://doi.org/10.1177/0022219411402691.
Stevens, E. A., Park, S., & Vaughn, S. (2019). A review of summarizing and main idea interventions for struggling readers in grades 3 through 12: 1978-2016. Remedial and Special Education, 40, pp. 131-149. https://doi.org/10.1177/0741932517749940.
Stockard, J., Wood, T. W., Coughlin, C., & Khoury, C. R. (2018). The effectiveness of direct instruction curricula: A meta-analysis of a half century of research. Review of Educational Research, 88(4), pp. 479-507. https://journals.sagepub.com/doi/10.3102/0034654317751919.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), pp. 257-285. https://doi.org/10.1207/s15516709cog1202_4.
Sweller, J. (2011). Human cognitive architecture: Why some instructional procedures work and others do not. In K. Harris, S. Graham, & T. Urdan (Eds.), APA Educational Psychology Handbook (Vol. 1). Washington, DC: American Psychological Association.
Sweller, J. (2016). Working memory, long-term memory, and instructional design. Journal of Applied Research in Memory and Cognition, 5, pp. 360-367. https://doi.org/10.1016/j.jarmac.2015.12.002.
Sweller, J., van Merriënboer, J. J. G., & Paas, F. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, pp. 251-296. https://doi.org/10.1023/A:1022193728205.
Thiede, K. W., Anderson, M., & Therriault, D. (2003). Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology, 95, pp. 66-73. https://doi.org/10.1037/0022-0663.95.1.66.
Torrijos-Muelas, M., González-Víllora, S., & Bodoque-Osma, A. R. (2021). The persistence of neuromyths in the educational settings: A systematic review. Frontiers in Psychology, 11:591923. https://doi.org/10.3389/fpsyg.2020.591923.
Vasconcellos, D., Parker, P. D., Hilland, T., Cinelli, R., Owen, K. B., Kapsal, N., Lee, J., Antczak, D., Ntoumanis, N., Ryan, R. M., & Lonsdale, C. (2020). Self-determination theory applied to physical education: A systematic review and meta-analysis. Journal of Educational Psychology, 112(7), pp. 1444-1469. https://doi.org/10.1037/edu0000420.
Wang, S., Hsu, H., Campbell, T., Coster, D., & Longhurst, M. (2014). An investigation of middle school science teachers' and students' use of technology inside and outside of classrooms: Considering whether digital natives are more technology savvy than their teachers. Educational Technology Research and Development, 62(6), pp. 637-662. https://doi.org/10.1007/s11423-014-9355-4.
Wang, Z., & Tchernev, J. (2012). The "myth" of media multitasking: Reciprocal dynamics of media multitasking, personal needs, and gratifications. Journal of Communication, 62, pp. 493-513. https://doi.org/10.1111/j.1460-2466.2012.01641.x.
Welcome, S. E., Paivio, A., McRae, K., & Joanisse, M. F. (2011). An electrophysiological study of task demands on concreteness effects: Evidence for dual coding theory. Experimental Brain Research, 212(3), pp. 347-358. https://doi.org/10.1007/s00221-011-2734-8.
Westby, C., Culatta, B., Lawrence, B., & Hall-Kenyon, K. (2010). Summarizing expository texts. Topics in Language Disorders, 30(4), pp. 275-287. https://eric.ed.gov/?id=EJ906737.
Willingham, D. (2017). Ask the cognitive scientist: Do manipulatives help students learn? American Educator, 41(3), pp. 25-40. https://www.aft.org/ae/fall2017/willingham.
Willingham, D. (2019). Ask the cognitive scientist: Should teachers know the basic science of how children learn? American Educator, 43(2), pp. 30-36. https://www.aft.org/ae/summer2019/willingham.
Wood, E., & Zivcakova, L. (2015). Multitasking in educational settings. In L. D. Rosen, N. A. Cheever, & M. Carrier (Eds.), The Wiley handbook of psychology, technology and society (pp. 404-419). Hoboken, NJ: John Wiley & Sons, Inc.
Wooten, J. O., & Cuevas, J. A. (2024). The effects of dual coding theory on social studies vocabulary and comprehension in a 5th grade classroom. International Journal on Social and Education Sciences (IJonSES), 6(4), pp. 673-691. https://doi.org/10.46328/ijonses.696.
Zhang, W. (2015). Learning variables, in-class laptop multitasking and academic performance: A path analysis. Computers & Education, 81, pp. 82-88. https://doi.org/10.1016/j.compedu.2014.09.012.

Appendix A

10 Pedagogy Knowledge Items

Myths:
• Learning styles: When classroom instruction is designed to appeal to students' individual learning styles, students are more likely to learn more. (negatively coded/false)
• Discovery learning: Students learn best if they discover information for themselves through activities with minimal guidance, when they can randomly experiment on their own. (negatively coded/false)
• Extrinsic motivation: Students' long-term learning outcomes are likely to be better if teachers and professors stimulate extrinsic motivation through things such as rewards. (negatively coded/false)
• Multitasking: Incorporating instruction that involves students in multitasking activities, such as when they think about multiple concepts at once, leads to better learning outcomes. (negatively coded/false)
• Digital natives: Students learn better when digital tools are incorporated within instructional practice because students today are naturally more adept at technology, having used it from such a young age. (negatively coded/false)

Effective Approaches:
• Dual coding/Imagery for text: It is generally true that students' memory of lesson content tends to be stronger if visuals and images are used to supplement class lectures, discussions, and readings. (true)
• Summarisation: When students summarise content by reducing information into concise, essential ideas, it helps build students' knowledge and strengthen skills. (true)
• Practice testing: Quizzes and practice tests using open-ended questions tend to boost learning even if students do not do well on them. (true)
• Direct instruction: Direct instruction tends to be an ineffective method for teaching content to students. (negatively coded/false)
• Spacing: Students tend to learn more when instruction is delivered via focused, intensive sessions over a brief time period rather than when information is spread out and revisited over longer time spans. (negatively coded/false)
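For readers who wish to see how negatively coded items translate into a knowledge score, the sketch below shows one plausible scoring scheme in which the seven false statements (the five myths plus the direct instruction and spacing items) are keyed False. The column names, answer key layout, and `knowledge_scores` helper are invented for illustration; they are not the authors' instrument or code.

```python
import numpy as np
import pandas as pd

# Hypothetical answer key for the ten Appendix A items: three statements are
# keyed True, and the seven negatively coded statements are keyed False, so a
# respondent earns a point for rejecting a myth as well as for endorsing an
# effective practice.
ANSWER_KEY = {
    "learning_styles": False, "discovery_learning": False,
    "extrinsic_motivation": False, "multitasking": False,
    "digital_natives": False,
    "dual_coding": True, "summarisation": True, "practice_testing": True,
    "direct_instruction_ineffective": False, "massed_beats_spaced": False,
}

def knowledge_scores(responses: pd.DataFrame) -> pd.Series:
    """Count, per respondent, how many true/false answers match the key (0-10)."""
    key = pd.Series(ANSWER_KEY)
    return (responses[key.index] == key).sum(axis=1)

# Usage on simulated boolean responses (107 respondents x 10 items).
rng = np.random.default_rng(3)
fake = pd.DataFrame(rng.integers(0, 2, size=(107, 10)).astype(bool),
                    columns=list(ANSWER_KEY))
print(knowledge_scores(fake).describe())
```

Keying the negatively coded items to False ensures that a higher total always reflects stronger pedagogical knowledge rather than mere agreeableness.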
Appendix B

5 Metacognition Items (Cronbach's alpha = .787)
• My teaching is heavily influenced by a strong familiarity with the research on how students learn.
• I am confident that I am knowledgeable of the best practices to enhance student learning.
• I am confident that I utilise the best practices to enhance student learning.
• I feel less familiar with teaching strategies and best practices compared to my colleagues. (negatively coded)
• I am not always confident that my knowledge of pedagogy and how students learn is as strong as it could be. (negatively coded)

Declarations and Compliance with Ethical Standards

Ethics Approval: All procedures performed were in accordance with established ethical standards and were approved by the University of North Georgia Institutional Review Board.
Informed Consent: Informed consent was obtained from all participants included in the study.
Competing Interests: This research was not grant-related. The authors declare that they have no conflict of interest.
Funding: This research was not funded.
Data Availability: The data associated with this study are owned by the University of North Georgia. Interested parties may contact the first or second author regarding availability and access, and requests will be considered on an individual basis according to institutional guidelines. Anonymised data output sheets are available by contacting the first or second author of the study.