## An interesting area problem

Here’s an interesting little question for you:

*Have you worked it out? How long did it take you to see it?*

It took me at least a few seconds; I had screenshotted the picture and was reaching for a pencil when the penny dropped, and that's why I thought it was an interesting question.

The answer is, of course, 100π. This follows easily from the information you have, as the diagonal of the rectangle is clearly a radius – the top left corner lies on the circumference and the bottom right corner is at the centre.
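If you want to sanity-check the arithmetic, here's a quick sketch in Python. The picture isn't reproduced here, so I'm assuming the rectangle is the 6 by 8 one that the "6, 8, 10 triangles" misdirection suggests:

```python
import math

# The diagonal of the rectangle is a radius: one end sits on the
# circumference and the other at the centre of the circle.
# Assumed dimensions (the original picture is not shown here):
width, height = 8, 6
radius = math.hypot(width, height)          # the diagonal: sqrt(8^2 + 6^2) = 10

area = math.pi * radius ** 2                # pi * 10^2 = 100*pi
print(f"radius = {radius}, area = {area:.2f}")  # radius = 10.0, area = 314.16
```

No Pythagoras actually needed in the end, of course – the whole point is that the diagonal *is* the radius.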

*So why didn't I spot it immediately?*

I think the reason I didn't spot it instantly might be the misdirection in the question: the needless information about the height of the rectangle had me thinking about 6, 8, 10 triangles before I had even worked out what the question was asking.

I see this in students quite often at exam time: they can get confused about what they're doing, and it links to the piece I wrote earlier about analogy mistakes. The difference is that I wasn't constrained by my first instinct, but all too often students are, and they can worry that a problem must be solved in the manner they first thought of.

Earlier today a student was working on an FP1 paper and struggling with a parabola question. He had done exactly this: he had assumed one approach, which wasn't the right way, and got hung up on it. When he showed me the problem my instinct was the same as his, but when I hit the same dead end he had, I stepped back, asked "what else do we know?", and then saw the right answer. I'm hoping that by seeing me do this he will realise that first instincts aren't always correct.

I’m going to try this puzzle on all my classes tomorrow and Friday and see if they can manage it!

*How quickly did you see the answer? Do you experience this sort of thinking from your students? I’d love to hear any similar experiences.*

Cross-posted to Betterqs here.

## Passivity in the maths classroom

Today I managed to find a few minutes to browse the latest issue of Mathematics Teaching, the ATM journal. One article that caught my eye was the "from the archive" section, where Danny Brown (@dannytybrown) introduced an article that was first published in 1957. The article was written by Ruben Schramm and is entitled "The student's passive attitude towards mathematics and his activities."

The article discusses mathematics teaching, particularly the nature of students who often, for whatever reason, try to find an algorithmic method to follow to solve a problem, looking to recognise the problem and answer it in a similar way to how they have answered questions before. This is a problem that was obviously prevalent in the 1950s, as evidenced in the paper, but it is still prevalent now, and I feel the nature of our exam system must at least hold a portion of the blame. The questions on exams tend to be very similar, and students will learn methods to answer them whether their teachers like it or not. This is one issue I hope will be dampened a little by the upcoming changes to the exams.

Schramm suggests that this passivity in maths, this tendency to look for algorithms, is partly down to how students see mathematics. He suggests that when they see teachers solve problems on the board by delivering a slick, scripted solution, they can get a feeling that it is done via "witchcraft", and see the whole process as a series of uncoordinated steps rather than a series of interconnected mathematical ideas. The latter would encourage students to derive the mathematics from their own internal ideas, and this would leave them more able to apply their knowledge in new contexts. If we can develop this at all levels then I feel we really would be educating mathematicians – ie giving students the skills to apply their knowledge in new contexts, rather than teaching them to follow a recipe to answer a question.

Schramm goes on to discuss authority: the infallible authority that students see in their teachers and in mathematical theorems and formulae. He suggests that because students see these theorems as infallible, they reach for them in their memories and try to apply them to problems. This can mean that the problem they are applying them to is only vaguely similar to the problem the theorem or method actually solves. Schramm calls these "analogy mistakes", and suggests it is how comfortable students feel with the content that determines whether they revert to them. I feel this is true in part, but also that the pressure of exams can lead students to confuse things in their heads if they have opted to learn algorithms rather than develop a deeper understanding.

I've had a couple of examples of these "analogy mistakes" in lessons and exams recently. A year 12 student came to an after-school elective as she was trying to solve some coordinate geometry problems involving tangents. She had got herself really confused because in her notes she had written "tangent gradient is perpendicular" (when discussing circles), but she didn't think it should be perpendicular because a tangent at a point should have the same gradient as the curve. I spent a little time discussing where her misconception had come from (her notes should have said "perpendicular to the radius") and how she could remember this more easily if she thought about the graphs and sketched them.

Another example came in a recent exam, where one of my students answered part of a question on alternative forms incorrectly. She had done the alternative-form part well, obtaining 25sin(x + a), but when asked for the maximum of the function she wrote -25. When I questioned her about this afterwards, it seems she had fallen victim to an "analogy mistake": she had remembered "maximum is negative" from discussing second derivatives, and under the pressure of the exam this memory had taken over, rather than the rational thought process that should have flagged that the maximum of the function would be 25, which is definitely bigger than -25.
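A quick numerical sketch makes the point: whatever the value of a, sin(x + a) ranges over [−1, 1], so the maximum of 25sin(x + a) is 25 and −25 is its minimum. The value of a below is arbitrary, since the exam question's actual value isn't shown here:

```python
import math

# 25sin(x + a) for an arbitrary phase shift a; the exam's actual value
# of a is not reproduced here.
a = math.radians(40)
values = [25 * math.sin(math.radians(x) + a) for x in range(360)]

print(round(max(values), 1))   # 25.0, attained near x = 50 degrees
print(round(min(values), 1))   # -25.0: the minimum, not the maximum
```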

In his preface, Danny Brown suggests that one way to counteract this would be through questioning and discussion: if we remove the authority from the discussion and don't validate answers by declaring them correct or incorrect, but instead open them up as conjectures for the class to discuss, then we allow students to develop their own mathematical ideas. Lampert (2001) also discussed this idea, suggesting that as teachers we need to strike the right balance between allowing students to discuss and conjecture and ensuring they understand what is important and aren't making mistakes. This is something I strive for in my own classroom, and something I am currently working to improve.

*This post was cross posted to Betterqs here.*

## Further thoughts on the white paper

Recently I read the white paper "Educational Excellence Everywhere". It's an interesting document; I wrote my initial thoughts when I heard the headlines on academies here, then my initial thoughts having read the first chapter here. Since then I have read and digested the rest of the white paper, and I wanted to share some of my thoughts on it, discounting thoughts on wholesale academisation as I've written about that before.

**Great teachers – everywhere they’re needed**

My first thought when reading this chapter title was “surely that’s everywhere?” The section focuses on getting the best teachers into the most deprived areas using cash and promotions as incentives. I can certainly see a need for this, but I worry that there could be negative outcomes for some.

If all the good teachers go to the struggling areas, who is left to teach the kids in the middle ground – not deprived enough to be in one of the key areas, but not rich enough to be at a fee-paying school?

I also worry that those gaining these promotions would be the game players, the ones who put their own results above everything else, including their students: the type of leaders who push students onto courses they have no interest in and won't benefit from, because those students will gain a good grade that reflects well on them. These are not the sort of people we want to put in charge.

In fact, it is the prevalence of leaders like that – who assign much more importance to some kids than others because of the effect they will have on the results – that leads to the most able kids from disadvantaged backgrounds being more likely to fall behind those with similar prior attainment but a more advantaged background. This is usually because schools forget about these more able students, and those from disadvantaged backgrounds have less help outside school.

**Recruitment and retention**

The white paper acknowledges the recruitment and retention crisis and suggests some ways in which it will try to improve the situation. The aims of reducing bureaucracy and workload are certainly well-meaning and would benefit not only retention but the quality of teaching. Some of the ideas mentioned – ie the possibilities for replacing QTS – however, seem like they will in fact be more paperwork-heavy.

**Leadership**

The idea of improving leadership in our schools to improve teaching and retention is a good one. The incentives on offer, and the alterations to the accountability framework to make it more progress-based, should encourage more great leaders to take up roles in challenging schools.

I'm very much in favour of the move from threshold passes to progress, but I'm worried that Attainment 8 and the number of grade 5s and above will actually be the important measures in practice, so I'm waiting with interest to see how it plays out.

I like the idea of improvement periods, which give new heads a good length of time to turn around schools deemed to require improvement. I did wonder how this would apply to heads who took over just before an inspection, and I worry that there seems to be a suggestion that an RI grading would mean a new head.

**Fair funding formula**

There wasn't enough technical detail here for me, but in principle it sounds like they are considering all the right things – levels of disadvantage, needs of pupils, needs of the school (ie more money to rural and island schools, which would otherwise go under as they serve communities with too few children to fill them).

**Parental involvement**

The aim to have all schools involve parents more is a noble one, and one that should be striven towards. However, I have recently come across some research showing that, in disadvantaged areas of California, policies that discouraged parental involvement actually had a positive effect, while those that encouraged it didn't. This suggests we need to look at how we are involving parents and make sure that it is in a manner that is beneficial to all.

**The College of Teaching**

I've been a little reticent to get behind the College of Teaching; it seemed at first to be the answer to a question no one was asking, and I thought it wouldn't have any benefit. The white paper, however, suggests that a large part of its role will be ensuring teachers have access to educational research and are involved in creating it through their own journal. This is a positive thing in my view, as are the ideas around making the profession more savvy when it comes to research and evidence, to stop any more fads like Brain Gym gaining footholds in the shared consciousness.

*What are your views on the white paper? I’d love to hear whether you agree or disagree with anything I’ve said. I’d also be interested to hear if you picked up on anything I’ve not mentioned or if you took a different inference to something in the white paper than I did. Feel free to comment here or contact me via email or social media.*

## Effective Pedagogy

Recently I've done a fair bit of reading for my dissertation, and two pieces of the literature have had very similar titles: there was The Effective Teaching of Mathematics (Simmons 1993), mentioned here, and then there was "The effective teaching of mathematics: a review of research" (Reynolds and Muijs 1999).

It is the second one which I want to share some thoughts on today. It is an interesting article which is aimed at school leaders and policy makers and looks to a variety of sources to create an idea of effective maths teaching.

The main areas it looks at are pieces of teacher effectiveness research, both from the UK and from the USA, and professional evidence on teacher effectiveness from the UK – namely the three most recent reports on maths teaching from Ofsted (most recent as of 1999).

**Whole class teaching**

This mixture of academic and professional evidence is analysed and brought together, and the article finds that all three areas suggest "whole class teaching" is the most effective way of teaching maths. That isn't to say they suggest we all lecture to silent classes for entire lessons; rather, they advocate a form of "active" instruction, which punctuates the instruction with questioning – to assess the learning and to see where the class's understanding needs expanding – and with opportunities for practice and consolidation.

This idea makes a lot of sense to me: the teachers are the experts in the room, and they are best placed to pass on the knowledge. Listening to a well-planned presentation, then internalising it and practising to make sense of it, seems a good model.

**Group work**

While I was reading this it all seemed very sensible, intuitive and a great way to teach mathematical content, but I started to wonder how the other side of mathematics – the logical thinking and problem solving side – would be catered for in this model. Obviously the writers of the report felt the same, as they then moved on to group work and other ways to build problem solving ability in your students.

They looked at the idea of group work, suggesting that the opportunity for students to discuss their mathematical ideas with peers, and to work out between them how the mathematics works, would be beneficial. They also felt that scaffolding could enable all students to work within their zone of proximal development, giving everyone a chance to develop. They expressed concerns around social loafing and the possibility of student misconceptions being reinforced.

Their findings included many examples of group work being an effective tool in problem solving, but they state that to reap the rewards teachers need to spend a lot of time setting it up. I can see that this may be true, and feel there could be a place for small-group work on these types of problems, especially amongst A level students and others who need to work out how to apply their knowledge to solve unfamiliar problems.

The article suggests that group work can be integrated into the active instruction model, taking the place of some of the practice section, and I certainly agree that it could fit. I also feel that modelling a problem solving approach for part of the instruction element of the lesson can give students an insight into how a more experienced mathematician would approach a problem.

**Differentiation**

A rather interesting finding was that poorer, less effective lessons often include overly complex arrangements for individual work. The suggestion was that those lessons where the teacher has spent all night creating separate worksheets for each student actually had little to no impact, even a negative impact at times. This certainly suggests that this level of time-consuming differentiation is unnecessary, and that tasks can be differentiated far more easily and effectively by producing a resource that is stepped in difficulty and allowing different starting points, or moving students on more quickly.

*I found this report very interesting: it backed up some of the ideas I already had on effective maths teaching and challenged some of the others. I am now planning to trial some small-group work with A level students to build problem solving capability.*

**Reference**

Reynolds, D. & Muijs, D., 1999, The Effective Teaching of Mathematics: A Review of the Research, *School Leadership and Management*, Vol. 19 (3), pp. 273–288 (available online here.)

Simmons M, 1993, *The Effective Teaching of Mathematics*, Longman: Harlow

## Accuracy with Trigonometry

*This post was originally posted here on Cavmaths and here on BetterQs, on 5th March 2016; however, the original post somehow got deleted so I'm re-posting it.*

This week I was planning to cover upper and lower bounds with year 11, as a lot of them made mistakes on this topic in the last mock, so I felt it would be a good one to revise. As part of the planning process I had a look through the higher textbooks our department has bought for the new specification GCSE *(we bought the Pearson ones, the full suite at KS3 and KS4. There are some great questions in them, and the online version, ActiveTeach, is great for taking questions and placing them into your lessons. I'd definitely recommend it, if used correctly, but I will admit to being disappointed to see a formula triangle being advised…)* to see if there were any good questions I could pilfer, and I came across the section on using upper and lower bounds in trigonometry.

My first thought was, “that’s a nice topic”, and then the full spectrum of the topic began to unfold.

Initially, I had liked the idea that students would be required to think about the fraction, and how minimising the denominator actually maximises it, but then I remembered the nature of the cosine function! This example shows what excited me:

Not only would students be required to understand the nature of a fraction, they'd also need a deep understanding of the cosine function itself, to understand that the bigger cos x is, the smaller x is, and vice versa (where x is between 0 and 90, of course). This could come from a deep understanding of the graph, or the unit circle, or just the geometry of a right-angled triangle.

The example itself is very procedural, which is a shame, but it does give a teacher a good frame to start discussions. I wouldn't use a textbook example for teaching anyway, just as an additional example to talk through one-on-one with students who are still struggling.
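To illustrate the "flip" that makes this topic interesting, here's a small sketch in Python. The numbers are made up, since the textbook's own example isn't reproduced here: adjacent = 6 cm and hypotenuse = 8 cm, each measured to the nearest cm, with x = arccos(adjacent/hypotenuse).

```python
import math

# Hypothetical measurements, each to the nearest cm:
adj_lo, adj_hi = 5.5, 6.5   # bounds on the adjacent side
hyp_lo, hyp_hi = 7.5, 8.5   # bounds on the hypotenuse

# cos is decreasing on [0, 90] degrees, so the BIGGEST angle x comes from
# the SMALLEST value of cos x: minimise the fraction by taking the smallest
# adjacent over the largest hypotenuse. The bounds flip for the angle.
x_upper = math.degrees(math.acos(adj_lo / hyp_hi))   # roughly 49.7 degrees
x_lower = math.degrees(math.acos(adj_hi / hyp_lo))   # roughly 29.9 degrees

print(round(x_lower, 1), round(x_upper, 1))
```

So the upper bound of the angle comes from the lower bounds of the fraction's ingredients – exactly the kind of reasoning the students would need to articulate.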

The textbook goes on to pose this awesome discussion question:

A really nice prompt to get an in-depth discussion of the trig ratios going. I often use similar prompts when looking at maximum values for sine and cosine – "what's the biggest opp/hyp can ever be?", for example. This often gives a nice discussion focus.

I think that this topic shows how different the new specification will be. Students are going to need a much deeper relational understanding if they are to achieve the top grades with questions like this being posed.

*What do you think of bounds being questioned in relation to trigonometry? Have you used prompts like this before? How have you found them?*

## Hippocrates’s First Theorem

Over the half term I was doing some reading for my MA and I happened across Hippocrates’s First Theorem. (Not THAT Hippocrates, THIS Hippocrates!)

Here is the mention in the book I was reading (Simmons 1993):

It's not a theorem I'd ever come across before, and it doesn't seem to have any real applications; however, it is still a nice theorem, and it made me wonder why it worked, so I set about trying to prove it.

First I drew a diagram and assigned an arbitrary value to the hypotenuse of triangle A.

I selected 2x, as I figured it would be easier than x later when looking at sectors.

I then decided to work out the area of half of A.

A nice start – splitting A into two smaller right angled isosceles triangles made it nice and easy.

I then considered the area B, and realised that to find it I'd need to work out the area the book had shaded, which I called C.

Then the area of B was just the area of a semi circle with the area of C subtracted from it:

Which worked out as the area of the triangle (ie half the area of A), as required.
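In case the pictures of my working don't come through, here is a sketch of the algebra for this case, taking the hypotenuse of A as 2x as above, so each leg of A is x√2 and the big semicircle has radius x:

```latex
\begin{align*}
\text{half of } A &= \tfrac{1}{2}\cdot\tfrac{1}{2}\left(x\sqrt{2}\right)^{2} = \tfrac{x^{2}}{2},\\[2pt]
\text{semicircle on a leg} &= \tfrac{\pi}{2}\left(\tfrac{x\sqrt{2}}{2}\right)^{2} = \tfrac{\pi x^{2}}{4},\\[2pt]
C &= \underbrace{\tfrac{\pi x^{2}}{4}}_{\text{quarter of the big circle}} - \underbrace{\tfrac{x^{2}}{2}}_{\text{half of } A},\\[2pt]
B &= \tfrac{\pi x^{2}}{4} - C = \tfrac{x^{2}}{2} = \text{half of } A.
\end{align*}
```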

This made me wonder if it worked for all triangles inscribed in semicircles this way – ie whether the areas of the semicircles on the short legs that fall outside the semicircle on the longest side equal the area of the triangle.

My first thought was that for all three vertices to sit on the edge of a semicircle in this way, the triangle must be right-angled (via Thales's Theorem).

I called the length eg (ie the diameter of the large semi circle and the hypotenuse of efg) x and used right angled triangle trigonometry to get expressions for the two shorter sides ef and fg. Then I found the area of the triangle:

I then considered the diagram, to see where to go next:

I could see that the shaded area needed to be found next, and that this was the area left when you subtract the triangle from the semicircle.

I could now subtract this from the two semi circles to see if it did equal the triangle.
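Again, in case the images of the working don't load, the general case runs like this, taking eg = x as the hypotenuse, θ as the angle at e (so ef = x cos θ and fg = x sin θ), and T as the area of the triangle:

```latex
\begin{align*}
T &= \tfrac{1}{2}\,x\cos\theta \cdot x\sin\theta = \tfrac{1}{2}x^{2}\sin\theta\cos\theta,\\[2pt]
\text{semicircle on } eg &= \tfrac{\pi}{2}\left(\tfrac{x}{2}\right)^{2} = \tfrac{\pi x^{2}}{8},\\[2pt]
\text{shaded} &= \tfrac{\pi x^{2}}{8} - T,\\[2pt]
\text{semicircles on the legs} &= \tfrac{\pi}{2}\left(\tfrac{x\cos\theta}{2}\right)^{2} + \tfrac{\pi}{2}\left(\tfrac{x\sin\theta}{2}\right)^{2} = \tfrac{\pi x^{2}}{8},\\[2pt]
\text{lunes} &= \tfrac{\pi x^{2}}{8} - \left(\tfrac{\pi x^{2}}{8} - T\right) = T.
\end{align*}
```

The sin² + cos² = 1 step is what makes the semicircles on the legs add up to the semicircle on the hypotenuse, which is really just Pythagoras in disguise.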

Which it did. A lovely theorem that I enjoyed playing around with and proving.

*I think there could be a use for this when discussing proof with classes, it’s obviously not on the curriculum, but it could add a nice bit of enrichment.*

*Have you come across the theorem before? Do you like it? Can you see a benefit of using it to enrich the curriculum?*

**Reference:**

Simmons M, 1993, *The Effective Teaching of Mathematics*, Longman: Harlow

## A tale of two Graphs

For the last two years I’ve collected some amazingly bad graphs from election material, both that has come through my door, and that other people have sent me (see this and this.) My own MP provided so many gems that Colin Beveridge (@icecolbeveridge) started an Internet campaign to have people refer to these misleading election graphs as “Mulhollands” – after the man himself. This led to Colin and others tweeting him questions about his misleading graphs and one teacher, Adam Creen (@adamcreen) tweeting him with corrected “Mulhollands” that his Y9 class had completed.

This was obviously effective: each time a piece of campaign literature has arrived since, I have scoured it, and there has only been one chart on any of them – and that was correctly drawn! A success! And it seems to be a widespread one: I've had many other folks looking out for them, and I've not come across any hideously inaccurate graphs this year.

I did think that this election season would pass without any mention on this blog but today I came across two interesting graphs from a neighbouring ward. Both are accurate, but they tell very different stories, and reminded me a little of Simpson’s Paradox, without actually being directly related to it.

**Exhibit A**

This graph is from the Lib Dems in Horsforth ward for the Leeds City Council elections. It's not wholly accurate in terms of the bar heights, but it's near enough not to irk me too much. It shows that, of the 5 previous local elections in the area, the Lib Dems have won 3 and the Conservatives have won 2. They are using this to sell the idea that only they or the Conservatives can win the seat. However…

**Exhibit B**

This is from a Labour party leaflet in the very same ward, and it shows that in the last local election the Labour candidate came second to the Conservative candidate, with the Lib Dem in last place. The inference here is that Labour are more likely to beat the Conservatives, as they came second last time.

Both leaflets are presenting true facts selected to further their narrative, and both are presenting them accurately, although one could argue they are both a little misleading.

I've looked at the stats from the last few years: it seems that in general election years the Tories win by a fair way, but that in local election years it is tight between all three parties, though the Lib Dem vote has been steadily dropping. It could be an interesting ward for judging the national feeling on Friday when the results come in, as it is really a three-way marginal in non-GE years.

The anomaly of the general election year is interesting. More people do vote locally when there is a GE, but the massive swing to the Tories is fairly unusual, as they tend to be good at mobilising their vote. I do know that the Lib Dems in that constituency didn't really campaign during the GE, when the seat was a Tory–Labour marginal, and that in a neighbouring Lib Dem–Labour marginal the Conservatives didn't campaign, so perhaps this had an effect.

If you have found any terrible election graphs, please send them to me!
