The Truth Behind The Politicized Attacks On The California Math Framework: Keep Mathematics As Currency/Status For The Privileged

Sunil Singh
21 min read · Apr 13, 2024


When Ted Cruz and FOX News jump in on the math wars happening in California — mostly around the almost cartoonish misrepresentations of the California Math Framework — you know the whole debate/discourse has jumped the shark.

That’s because people are swerving out of their lanes like drunk drivers into the oncoming traffic of K-12 mathematics education. We can excuse politicians and media because, well, they are politicians and media. They are only good at selling provocative and distorted truths.

If Ted Cruz or FOX News knows that the sequence of numbers 2, 3, 5, 8, 13, 21, 34, etc. has an origin before Fibonacci’s rabbits (Hemachandra and Sanskrit poetry), I will delete this article and send an unsolicited apology to both.
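For curious readers, here is a minimal Python sketch of that claim (my illustration, not anything from the framework or from Conrad): Hemachandra’s counts arise from asking how many rhythmic patterns of n beats can be built from short one-beat and long two-beat syllables, a question of Sanskrit prosody studied centuries before Fibonacci.

```python
# Count the rhythmic patterns of n beats built from short (1-beat)
# and long (2-beat) syllables, as studied in Sanskrit prosody by
# Hemachandra long before Fibonacci's rabbits.
def patterns(n: int) -> int:
    # A pattern of n beats ends in either a short syllable
    # (leaving n-1 beats) or a long one (leaving n-2 beats).
    a, b = 1, 1  # patterns(0) = 1 (empty), patterns(1) = 1 (one short)
    for _ in range(n - 1):
        a, b = b, a + b
    return b

print([patterns(n) for n in range(2, 9)])  # [2, 3, 5, 8, 13, 21, 34]
```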

But right-wing politicians and pundits behave foolishly in the realm of mathematics education because of their ignorance of the subject, so I am confident I won’t be drafting any such replies.

We can excuse them. But we can’t excuse people like Brian Conrad, a mathematics professor at Stanford, who has gone out of his way and out of his league to suggest that he, of all people, has superior knowledge of K-12 mathematics education. He talks down to educators, disrespecting us; he has tried many times to derail the CMF; and he has directed a relentless and misogynistic tirade of harassment at Professor Jo Boaler from his own institution.

I will assume Conrad is a great university professor. I will also assume he sucks at anything to do with K-12 mathematics education because, well, he has zero experience in that realm. Let’s be honest: some university professors — because they are up the academic food chain — think they can confidently talk about something they regard as “beneath them”.

The main reason for writing this blog is to share a detailed response, from the CMF committee themselves, to Conrad’s political agenda.

_________________________________________________________________

Evaluating Perspectives on Mathematics Education Reform: A Critical Analysis of Opposition to the California Mathematics Framework

Mathematics professor Brian Conrad has drawn a great deal of attention for his critical reviews of the research used in developing the California Mathematics Framework (CMF). Opponents of reform in mathematics education point to his critiques as evidence that the entire framework is flawed. Unfortunately, many of the people promoting his critiques seem to have read neither the framework nor his critiques in any detail. Prof. Conrad has no experience in TK-12 mathematics education, nor does he have expertise in the area of educational reform. As shown below, Prof. Conrad’s work highlights a lack of understanding of educational research and reform.

The California Mathematics Framework (CMF) underwent a thorough and multi-stage approval process conducted by the California Department of Education (CDE). This process began with the State Board of Education (SBE) forming the Mathematics Curriculum Framework and Evaluation Criteria Committee (CFCC) in January 2020 to guide the development of the initial draft framework, following insights gathered from focus groups in 2019. The draft was then open to public comment through two 60-day review periods in 2021 and 2022, allowing stakeholders such as educators, parents, and students to contribute. Numerous revisions were made based on this extensive feedback. Following the second 60-day public comment period, the framework’s research base was rigorously evaluated by the Region 15 Comprehensive Center, led by WestEd. The culmination of these efforts was the SBE’s final approval of the framework in July 2023, ensuring that it was not only comprehensive but also grounded in solid educational research and practices to provide guidance for school districts in California.

Below we provide a small selection of his arguments, which are typical of the larger pool.

Prof. Conrad positions himself as an expert in mathematics education, even though he has limited experience teaching in a TK-12 classroom. The following examples shed light on his reasons for opposing the CMF, yet his perspective favors only a minority of the state’s student population. In particular, his critiques target research that positions all students as capable of accessing rigorous mathematics, including multilingual learners and students with disabilities. Notably, Prof. Conrad aligns with the past beliefs of individuals within Stanford’s mathematics department, historically opposed to mathematics reforms, as documented by various authors.

Most of the critiques on Prof. Conrad’s website apply only to an old first draft of the framework, not the version adopted by the state in July 2023. More recently, Prof. Conrad has released a new set of 10 critiques, which are also sampled below. These are his words accompanying the second critique:

“I warned the SBE more than a year ago that all citations must be checked for accuracy, since the CMF has no more credibility on this front; such thorough checking has not been done.” Brian Conrad

In Prof. Conrad’s points, which are still being shared in the service of discrediting the California framework, he uses a form of critique that is typical in mathematics research but does not apply to education. The critique is known as “proof by contradiction” — a method in which one incorrect or unproven idea renders the whole proof invalid. Application of this method can clearly be seen in the first example below, in which Prof. Conrad claims an entire published study is a “dud” because one set of parents did not return information.

In all cases, the critiques given by Prof. Conrad are weak, irrelevant, or reflective of a lack of understanding of research in education. Prof. Conrad has attempted, many times over, to wield his weak critiques to stop the mathematics framework in California. Those attempts have obviously failed, and he is now moving to halt reforms under consideration in the rest of the country, sharing the same set of claims that the California framework used flawed research.

Twelve examples are shared below as illustration; the first ten come from his original document, the last two from his more recent one.

1. Esmonde, I., & Caswell, B. (2010). Teaching mathematics for social justice in multicultural, multilingual elementary classrooms. Canadian Journal of Science, Mathematics and Technology Education, 10, 244–254.

Conrad critique: Chapter 2, lines 973–981: The description here of the article (Esmonde, Caswell, 2010) is a mixture of false and misleading, so it has to be removed. Consider the “number book project” for a kindergarten class in this article that the CMF highlights. The purpose was for the children’s families to share some stories. But the CMF omits the relevant fact that for the experience discussed in the article, there were no responses from the families (see page 11 of the article). How can the CMF promote an illustrative example from the literature and not tell teachers that in practice it was a dud?

“They then design classroom activities that draw on these number stories, songs, and games.” never actually happened in the case where it was tried. Given the CMF’s role as a source of guidance to math teachers, publishers, and school districts, it is a significant misrepresentation of a citation to present an idealized scenario and never mention that when tried it badly failed. [As a more minor matter, the “water project” for a 5th-grade class highlighted here in the CMF is presented on pages 9–10 of the article (and referred to in some later places), but it has virtually no math content (and certainly nothing at the 5th grade level). Hence, it makes no sense for this to be discussed in the CMF.]

Response: The paper Conrad cites describes a research study in which the researchers invited families to send in examples of mathematics used in their culture and homes. Because they did not receive responses from the kindergarten families, the researchers shared those that came from the second-grade parents. This is the reason Conrad gives for dismissing the whole article by calling it “a dud”.

The water project he describes as having “virtually no math content” is this:

The students kept water logs to track how much water they and their families used on a daily basis, and participated in water rationing challenges in which they problem-solved how they would use water if they were limited to only 400 L per day for a family. The work included topics of volume, capacity, multiplication, division, and proportional reasoning.
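To give a flavor of the arithmetic involved, here is a minimal sketch of the rationing challenge (the activities, volumes, and family size are my own invented assumptions, not details from the study):

```python
# A hypothetical version of the 5th-grade water-rationing challenge:
# plan a family's day against a 400 L budget and check the shares.
DAILY_BUDGET_L = 400
family_size = 4

usage = {                    # litres per use x uses per day (invented)
    "toilet flushes": 6 * 8,
    "showers":        40 * 4,
    "cooking":        10,
    "drinking":       2 * 4,
    "laundry":        70,
}

total = sum(usage.values())
print(f"Planned use: {total} L of {DAILY_BUDGET_L} L "
      f"({total / DAILY_BUDGET_L:.0%} of the budget)")
print(f"Per person: {total / family_size:.1f} L each")
```

Even this toy version exercises multiplication, division, and proportional reasoning, which is the committee’s point about the activity’s mathematical content.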

2. Data from the National Oceanic and Atmospheric Administration, National Centers for Environmental Information

Conrad critique: 7.3. Chapter 5, lines 1415–1422: The example from the literature in Figure 5.11 is significantly misrepresented here and so has to be removed and replaced with something else. Indeed, if one looks up the original source (as I did), one sees that the plots represent NO2 concentration as a weighted average (see the formula for Cj on page 3 of the cited 2017 paper of Clark) among 210,000 blocks of population organized by 1% ranges of non-white population in each. In other words, this is not representing change over time and space as said in the CMF, but rather over time and race (so to speak) where the effect of “space” is wiped out by the weighted averaging over population blocks. So this is not in any way an example of “change over time and change in space”. Hence, this has to be replaced with an actual example of such in order to illustrate the CMF’s intended message about variation across time and space.

Response: The purpose of this figure is not to showcase “change over time and space,” as Conrad suggests, but rather to convey the crucial fact that sample size impacts the representativeness of samples. Figure 5.11 displays tables with data samples, demonstrating the variance between small and large random samples and their respective representativeness concerning the entire population. It serves as an illustrative tool to emphasize the significance of sample size in the field of statistics, a fundamental concept students must grasp at this educational level. The goal is not to convey geographical or temporal shifts in the data but to provide students with a foundational understanding of statistical principles.
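The statistical point is easy to demonstrate. Below is a minimal simulation (my illustration, not the CMF’s figure or the NOAA data) showing that means of small random samples scatter widely around the population mean, while means of large samples cluster tightly:

```python
# Small random samples vary far more than large ones, so they
# represent the population less reliably.
import random

random.seed(1)
population = [random.gauss(50, 15) for _ in range(100_000)]
true_mean = sum(population) / len(population)

for size in (10, 100, 10_000):
    # Draw 200 random samples of this size and look at how much
    # their means spread out.
    means = [sum(random.sample(population, size)) / size for _ in range(200)]
    spread = max(means) - min(means)
    print(f"n={size:>6}: sample means range over {spread:.2f} points")

print(f"population mean: {true_mean:.2f}")
```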

3. Siegler, R. S., & Ramani, G. B. (2009). Playing linear number board games — but not circular ones — improves low-income preschoolers’ numerical understanding. Journal of Educational Psychology, 101(3), 545.

Conrad critique: “The CMF selectively cites research to make points it wants to make. For example, Siegler and Ramani (2008) is cited to claim that “after four 15-minute sessions of playing a game with a number line, differences in knowledge between students from low-income backgrounds and those from middle-income backgrounds were eliminated”. This passage, especially the use of the word “eliminated” and the absence of context, suggests a dramatic effect at many levels. In fact, the study was specifically for pre-schoolers playing a numerical board game similar to Chutes and Ladders and focused on their numerical knowledge.”

Response: Conrad’s claim that the passage is absent of context reveals his apparent lack of familiarity with the way research is referenced in education. The full citation is given so that people reading the framework can go to the study for the wider context. His critique seems to come down to the framework’s use of the word “knowledge” instead of “numerical knowledge” (which is assumed by educators). The wording could be improved and made more specific, but the fact that the word “numerical” is, arguably, missing does not negate the entire study.

4. V. Menon, “Salience Network,” in Arthur W. Toga, ed., Brain Mapping: An Encyclopedic Reference, vol. 2 (London: Academic, 2015), 597–611.

Conrad critique: The CMF says on lines 382–383 that “Another meaningful result from studies of the brain is the importance of brain connections,” citing a 2015 paper by Menon et al. (which I’ll call (Iuculano, 2015) because Iuculano is the first author named on the paper).

Response: This is not the paper that was referenced. The paper that was referenced was: V. Menon, “Salience Network,” in Arthur W. Toga, ed., Brain Mapping: An Encyclopedic Reference, vol. 2 (London: Academic, 2015), 597–611.

His entire critique that the Iuculano study does not show evidence of connectivity is irrelevant as that was not the study cited. Importantly, multiple studies have shown the importance of brain connections in mathematical learning (Feigenson, Dehaene, and Spelke, 2004; Hyde, 2011).

5. Burris, C. C., Heubert, J. P., & Levin, H. M. (2004). Math acceleration for all. Educational Leadership, 61(5), 68–72.

Conrad critique: In yet another case, the CMF cites Burris et al (2006) for demonstrating “positive outcomes for achievement and longer-term academic success from keeping students in heterogenous groups focused on higher-level content through middle school”. But the CMF never tells the reader that this paper studied the effect of teaching Algebra I for all 8th grade students (getting good outcomes) — precisely the uniform acceleration policy that the CMF argues against in the prior point.

Response: This is the description of the study in the CMF, clearly stating that the schools “ended tracking in mathematics and gave all students access to the more advanced three-year curriculum sequence.”

The critique, that the word “algebra” was not used, is not valid, as the course was “Integrated 1”, and the details are in the study for anyone who wants to read it:

“One racially and economically diverse New York middle school that successfully accelerated all of its students offers an example of the conditions that enabled stronger outcomes. The school ended tracking in mathematics and gave all students access to the more advanced three-year curriculum sequence that had previously been reserved to a smaller number. This sequence included in eighth grade the Mathematics I integrated course normally offered in ninth grade. Researchers followed three cohorts in the earlier tracked sequence and three cohorts in the more rigorous untracked sequence. They found that both the initially lower and higher achieving students who learned in the later heterogeneous courses took more advanced math, enjoyed math more and passed the state Regents test in New York sooner than previously. This success was supported by a carefully revised curriculum in grades six through eight, creation of alternate-day support classes, known as mathematics workshops, to assist any students needing extra help, and establishment of common planning periods for mathematics teachers so they could develop stronger pedagogies together” (Burris, Heubert, and Levin, 2006). (pp. 660–674 in the CMF)

6. Committee of Ten Report: 3.1. Chapter 5, lines 1590–1593:

Conrad Critique: The myth here about the Committee of Ten from 1892 — that it promoted a high school math curriculum specifically focused on preparing for calculus — is false but has been repeated ad nauseam in the media for at least several years. It has to be removed. If one goes back (as I did) to read the mathematics section on pp. 104–116 and the general subject-area grade-level recommendations for math on pp. 35–51 of the original 1892 report (which all CMF writers and CFCC members could have done since the entire report is linked near the bottom of the Wikipedia page about the Committee of Ten), one can see that the way this committee’s work on math is described in the CMF is highly misleading and false. The high school course sequence recommended by the committee was not specifically designed for calculus preparation. Indeed, the report explicitly included two options, one having “bookkeeping and commercial arithmetic” for sophomore and junior years as alternatives to further algebra. It also required geometry, and only those seeking scientific or technical degrees were recommended to take a 4th year of math, in trigonometry. This was a sensibly balanced proposal, not skewed toward the goal of calculus. There is no mention of calculus anywhere in that report, and the fact that the pathway advocated for those planning to pursue a scientific or technical degree in college consisted of material in algebra, geometry, and functions leading to calculus is hardly a surprise. The CMF has to stop spreading the false myth about the Committee of Ten from 1892. That myth has misled CFCC members into believing incorrectly that the content of the conventional math curriculum is obsolete. It has also misled others in positions of authority (such as staff in the UC Office of the President) into believing incorrectly that the traditional math content is “limiting” (see slides 13 and 17 of this slideshow), whereas in reality the traditional math content (which can certainly be provided with more contemporary motivation) keeps all STEM options open; it is not “limiting” at all. I have only ever seen one discussion in the media for which the author clearly read the original 1892 report, and that author is also a mathematician.

Response: The CMF did not say that the Committee of Ten set out math courses that only prepared students for calculus, the CMF said this:

The traditional sequence of high school courses — algebra, geometry, algebra 2 — was standardized in the United States following the “Committee of Ten” reports in the 1890s. The course sequence — which was primarily designed to give students a foundation for calculus — has seen little change since the Space Race in the 1960s.

There is nothing inaccurate in this paragraph.

7. Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Granada Learning.

Conrad critique: Chapter 12, lines 221–228: This description of a 1998 study by Black and Wiliam on methods of assessment is a substantial misrepresentation, oversimplifying a complicated process, and hence it has to be removed. (Formative assessment is essentially measuring knowledge during the learning process to provide instant feedback, as opposed to the more traditional cumulative assessment that is called summative.) These claims are so hard to believe that one has to look up the study to see if this is really what it says. The answer is negative.

Firstly, the study acknowledges that teachers develop effective formative assessment slowly, via professional development. It is not any kind of “quick fix”, as the CMF seems to be suggesting here. Also, Black and Wiliam were analyzing the effectiveness of formative assessment along with an array of other innovative teaching practices, about which the CMF says nothing.

Response: Conrad is erroneously suggesting that the CMF overlooks the inclusion of summative assessment. However, his assertion is inaccurate. The CMF states, “Summative assessments have the potential to be anxiety-inducing for students, so some best practices should be implemented to minimize damaging effects.” The CMF proceeds with a table of best practices for summative assessment.

He also misrepresents the difference between formative and summative assessment. Formative assessment is assessment that informs learning, and includes no requirement, at all, to be “instant”. Summative assessment is assessment that summarizes achievement, and does not in any way need to be cumulative.

Nowhere did the CMF suggest that formative assessment was a “quick fix.” By contrast, it is described in the CMF as “the collection of evidence to provide day-to-day feedback to students and teachers so that teachers can adapt their instruction and students become self-aware learners who take responsibility for their learning.” This takes place gradually over the school year.

8. Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interest, and performance. Journal of Educational Psychology, 79(4), 474.

Conrad critique: Chapter 12, line 599: Here the CMF considers the choice among three options for evaluating classwork: giving it a grade, giving diagnostic feedback and no grade, or giving both such feedback and a grade. Work on this topic in (Butler, 1987, 1988) is cited, and it is said that the cited research shows groups of 5th and 6th grade students who got feedback and no grade “achieved at significantly higher levels” than such groups that got either of the other two treatments (for which the group-level achievements were comparable). We shall see that the CMF significantly misrepresents the scale and scope of Butler’s work, so this all has to be removed. It’s unclear what “achieved at significantly higher levels” means, especially when comparing with a group that seems to have never received numerical grades. We will come back to this. The CMF also says Butler arrived at some conclusions for the top and bottom quartiles by GPA within each experimental group:

“. . . both high-achieving (the top 25-percent grade point average) and low-achieving (the bottom 25-percent grade-point average) fifth and sixth graders suffered deficits in performance and motivation in both graded conditions compared with students who received only diagnostic comments.”

But the CMF doesn’t explain how a group that gets only diagnostic feedback and no grade has any meaningful concept of GPA (= grade-point-average), a puzzle which will be demystified when we discuss what Butler actually did (which is not what the CMF writing suggests).

Response: The achievement of the students who did not receive grades was, as stated, at significantly higher levels. Quoting the study:

“The comparison for the task-involved composite confirmed that pupils who had received comments scored higher than pupils who had received grades and praise, F(1, 192) = 30.5, p < .001, MSe = 2.61, and that these pupils scored higher than pupils who had received no feedback, F(1, 192) = 19.5, p < .001, MSe = 2.61.” (Butler, 1987, p. 477)

The study, as is typical in educational research, worked with students with different GPAs and delivered an intervention through which the students undertook tasks and received different forms of feedback. Researchers looked at the impact of the feedback on students’ performance on the assessments they took and on their motivation (using surveys). Conrad thinks it is a puzzle that the students have a GPA, but he is conflating the research intervention approach (in which some students did not receive grades) with their history of being graded and receiving GPAs as mathematics students.

9. Reeves, D. B. (2006). The learning leader: How to focus school improvement for better results. ASCD.

Conrad critique: Item 4 says that grading on a 100-point scale is “mathematically egregious” (what does that mean?), and says there should be just grades of 0, 1, 2, 3, 4. For all exams, even longer ones? With such a blunt system, how are kids who get 3 supposed to know whether they were close to a 4? When taking an exam, there is genuine information conveyed to a student when they score 82% versus 89%; now it should all be subsumed under “3”? How is there any notion of partial credit with this suggestion?

Response: The writing in the framework was talking about grading practices, not the taking of examinations, which could be reported out of 100.

The “mathematically egregious” statement refers to the averaging of grades on a typical 100-point scale. When schools use a 100-point scale for grading, they typically give zero points for any incomplete, missing, or failed assignment. Douglas Reeves (2006) has shown that when the 100-point scale is used, the gap between students receiving an A, B, C, or D is always 10 points, but the gap between a D and an F is 60 points. This means that one missing assignment could drop a student from achieving an A for a class to getting a D, as the sketch after the scales below illustrates.

Douglas Reeves’s recommendation is to use a 4-point scale:

A = 4

B = 3

C = 2

D = 1

F = 0

in which all intervals are equal, rather than:

A = 91+

B = 81–90

C = 71–80

D = 61–70

F = 0–60

Conrad’s critique is missing the point of the section, which is about grading practices.
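To make the arithmetic of Reeves’s point concrete, here is a minimal sketch using invented scores (my illustration, not an example from Reeves or the CMF):

```python
# One missing assignment (scored as zero) on the 100-point scale
# versus the 0-4 scale. The scores are invented for illustration.
scores_100 = [94, 92, 93, 0]   # three A's and one missing assignment
scores_4 = [4, 4, 4, 0]        # the same record on the 0-4 scale

avg_100 = sum(scores_100) / len(scores_100)  # 69.75 -> a D (61-70 band)
avg_4 = sum(scores_4) / len(scores_4)        # 3.0   -> a B

print(f"100-point average: {avg_100:.2f} (a D)")
print(f"4-point average:   {avg_4:.2f} (a B)")
```

On the 100-point scale, the single zero drags three A’s down to a D; on the equal-interval scale, the same record averages to a B.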

Conrad also questions the use of “partial credit” in a mastery-based grade. This critique distinctly indicates a lack of comprehension regarding standards- or mastery-based grading, a method integrated into numerous elementary school grading systems since the advent of the Common Core. Yet again, this underscores a limited understanding of the TK-12 educational context.

Furthermore, his question about how students know whether they are close to a 4 can be clarified by referring to literature in the CMF such as the key practices outlined in Linquanti (2014):

“- Establish clear learning goals and success criteria for lessons, and ensure students understand and agree with what these mean and entail;

- Take pedagogical action based on evidence of learning and provide students descriptive feedback linked to intended instructional outcomes and success criteria. Feedback during lessons helps to scaffold students’ learning by helping them to answer:

  - Where am I going?
  - Where am I now?
  - What are my next steps?

- Foster a collaborative classroom culture where students and teachers are partners in learning.”

10. Case of a student with multiple special educational needs achieving at the highest levels.

Conrad critique: This brief discussion of a student labeled low-IQ as a child and who went on to later earn an applied math PhD from Oxford is omitting the crucial information that he was dyslexic. That fact completely explains the original low-IQ diagnosis, and thereby makes this case entirely unrepresentative for claims about mathematical excellence revealing itself only later in life.

Response: This is the extract from the CMF:

“Mathematical excellence can develop or reveal itself at any life stage. Consider, for example, Nicholas Letchford, who started school labeled as having a low IQ and significant special educational needs. He went on to graduate from Oxford University with a doctorate in applied mathematics (Letchford, 2018).”

The mention of a student who excelled in mathematics, despite having multiple documented special educational needs, was included to show that students can overcome barriers to excel in mathematics. The fact that one of his documented special educational needs was dyslexia does not diminish this important message.

New Conrad Critiques:

11. Esmonde, I., & Caswell, B. (2010). Teaching mathematics for social justice in multicultural, multilingual elementary classrooms. Canadian Journal of Science, Mathematics and Technology Education, 10, 244–254.

Conrad critique: Chapter 2, lines 474–478: The article (Esmonde, Caswell, 2010) is still being invoked in a manner that is misleading to the reader (though not as misleading as before) and so the corresponding passage “For example, the Number Book Project . . . activities.” should be removed. The CMF promotes that reference’s “number book project” without mentioning it was a total dud in practice (see page 11 of the cited article). A 2nd-grade student shared a poem with a kindergarten class, which then did an activity on the numbers 1 to 20; this has nothing to do with the preceding text on lines 472–474 that the CMF purports it illustrates. There is no detail that any teacher can use to do anything of the type which is suggested. It is devoid of content.

Response: Conrad is again working to dismiss the Esmonde, I., & Caswell, B. (2010) article: Teaching mathematics for social justice in multicultural, multilingual elementary classrooms, claiming it is a “dud” because one set of parents did not return data.

12. Iuculano, T., Rosenberg-Lee, M., Richardson, J., Tenison, C., Fuchs, L., Supekar, K., & Menon, V. (2015). Cognitive tutoring induces widespread neuroplasticity and remediates brain function in children with mathematical learning disabilities. Nature Communications, 6(1), 8453.

Conrad critique: Chapter 1, lines 174–187: Here the CMF cites a paper (Iuculano et al., 2015) analyzing how activity levels of various brain regions for kids with math learning disabilities (MLD) changed in response to intensive tutoring, compared with a control group of non-MLD kids. Most earlier misrepresentation about this paper has been fixed, but the work is still being misrepresented to promote a pseudo-scientific narrative about neuroscience. The CMF describes the study this way:

“After eight weeks of one-on-one tutoring focused on strengthening student understanding of relationships between and within operations, not only did both sets of students demonstrate comparable achievement, but they also exhibited comparable brain activation patterns across multiple functional systems (Iuculano et al., 2015). This study is promising, insofar as it suggests that well-designed and focused math experiences may support brain plasticity that enables students to access and engage more productively in the content.”

This description is misleading because its phrase “comparable achievement” in the context of “student understanding” will cause most readers to think the study shows MLD kids can be brought to comparable levels of academic achievement in regular math classes with targeted tutoring. But the cited work was a study of 30 students ages 7–9, and its focus was on simple arithmetic tasks (such as adding small numbers).

Response: The framework states the findings of the study — that after a targeted mathematics intervention, students identified as having learning disabilities, who had previously achieved at significantly lower levels than students without disabilities, achieved at the same levels. These are quotes from the study:

“Moreover, we found evidence for performance normalization in MLD: before tutoring, children with MLD were significantly less accurate than their TD peers (t(28) = −2.318, P = 0.028, Cohen’s d = 0.85), while their accuracy performance after tutoring did not differ from TD children at pre-tutoring (t(28) = 0.471, P = 0.64, Cohen’s d = 0.17), or post-tutoring (t(28) = −0.598, P = 0.55, Cohen’s d = 0.22; Fig. 1c). (…)

Critically, these results were replicated in a separate arithmetic problem-solving task performed outside the scanner in which, instead of verifying addition equations, children were asked to verbally generate the answer to addition problems (Methods). Here again, performance differences that were evident between MLD and TD groups before tutoring (t(25) = −2.631, P = 0.014, Cohen’s d = 1.01) were entirely absent after tutoring (t(25) = −1.141, P = 0.26, Cohen’s d = 0.44; t(25) = 0.007, P = 0.99, Cohen’s d = 0.01 for TD’s performance before and after tutoring, respectively; Supplementary Fig. 1).”

Conrad claims that stating the students achieved at equivalent levels after the intervention will cause readers to believe that MLD students can be brought to comparable levels in regular math classes; this is inaccurate. Educators are fully aware that studies show a particular case of achievement. More interestingly, perhaps, his continued push to discredit evidence showing the improvement of students with special needs seems to reveal his motives for trying to discredit the framework.
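For readers unfamiliar with the t and Cohen’s d notation in the quotes above, here is a minimal sketch of how such statistics are computed for two groups (the accuracy scores are invented for illustration; they are not the study’s data):

```python
# A two-sample t statistic and Cohen's d for two groups' accuracy
# scores. The numbers below are invented, not the study's data.
import statistics

group_a = [0.55, 0.60, 0.48, 0.52, 0.58, 0.50]  # e.g. MLD, pre-tutoring
group_b = [0.70, 0.66, 0.74, 0.68, 0.72, 0.71]  # e.g. TD peers

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Pooled standard deviation (equal-variance form).
pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5

# A negative t here means group_a scored lower than group_b.
t = (mean_a - mean_b) / (pooled_sd * (1 / n_a + 1 / n_b) ** 0.5)
d = (mean_a - mean_b) / pooled_sd  # Cohen's d: difference in SD units

print(f"t({n_a + n_b - 2}) = {t:.3f}, Cohen's d = {d:.2f}")
```

“Performance normalization” in the quoted passages means exactly this kind of comparison: a large, significant difference before tutoring and a statistically indistinguishable one afterward.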

Conclusions

These are just twelve examples, but they are typical of the quality of Prof. Conrad’s critiques, which reflect a lack of understanding of educational research. This is not surprising, as it is not his field, and his application of the mathematical approach of finding one error or difference in interpretation in a study and then calling the whole study a “dud” is inappropriate. The work of the Math CFCC subcommittee falls under the purview of the Bagley-Keene Open Meeting Act. Under this act, all of the subcommittee meetings were open to the public, and each meeting provided an opportunity for public comment. The Math CFCC members carefully reviewed both oral and written public comment when submitting ideas for review. Prof. Conrad did not submit written suggestions, nor did he engage in the statutory process during the proceedings.

Prof. Conrad approached his review of the framework with an agenda: a number of his critiques target research showing the potential of all students to learn, and research describing schools that relate mathematics to students’ cultures. Nowhere in the news articles that applaud Prof. Conrad’s critiques is this agenda considered.
