# The Devil in the Thinking Classroom

This week, I enacted a lesson in Dual Credit Precalculus using The Devil and Daniel Webster task. All this year, I’ve been following as best I can the recommendations from Peter Liljedahl’s Building Thinking Classrooms in Mathematics (BTC) book. I thought that this task was particularly rich, and an example of how well students can persist on a challenging task at this point in the year, when they’re used to thinking deeply about mathematics.

I first came across this task as an example in 5 Practices for Orchestrating Productive Mathematics Discussions (2nd ed. p. 99). In the book, they describe a lesson plan, and there are lesson plans available from the NCTM Illuminations site as well. I also located an article by Maurice Burke which extends the activity by employing a Computer Algebra System (CAS) inside a spreadsheet on the TI-Nspire, which I’d never seen done before.

I decided not to present the students with any written form of the task, since the recommendation from BTC is to present tasks verbally. I also didn’t provide the specific questions or a pre-printed table. I simply told the story, first explaining the upside, and then doing an “Oh, but I forgot” when it came to the commission. This was a great hook, and the students chimed in with comments about never trusting a deal with the devil, and how it sounds too good to be true.

The first thing that needs clarification is that the written task above, in my opinion, does not distinguish between the amount you have and the amount you’re being paid. Other published versions make this clearer. The intended problem is that the devil doubles the amount left at the end of the previous day, and that becomes your current balance. When the written task says “I will pay you $1800,” it seems to suggest you get $1800 for that day in addition to the $900 you received on day 1. However, based on the answers and all other discussions of this problem, that is not the intended interpretation. What I found doing this problem with students is that many of them tried to double the amount from the beginning of the previous day, for example, getting $2000 for Day 2. I anticipated this confusion and clarified it immediately, because changing that setup creates a different problem entirely, with a different mathematical structure.

Groups (formed in the visibly random fashion they are used to) immediately went to their whiteboards (Vertical Non-Permanent Surfaces in the parlance of BTC) and started making tables “by hand,” i.e., using a calculator but going day by day through the problem. Most of them created two or three columns of values in addition to the day number. Eventually, all groups concluded that it was not a good deal, and that Daniel Webster goes broke after 10 days.

The really interesting phase of the lesson began when I started to push them to vary the initial values and ask questions to advance their thinking toward a more abstract model of the scenario. I suggested some problem-solving avenues, such as trying to model particular columns or using tools like spreadsheets or CAS.

I monitored student thinking using an anticipation guide I prepared according to the 5 Practices framework. In the two classes, a variety of anticipated and unanticipated strategies emerged. I used the progression from Burke to guide students in creating spreadsheet formulas so that their tables would change dynamically when the initial values were altered. One group conjectured a relationship between the initial payment and the day Daniel Webster goes broke. I anticipated that groups would try to write the exponential function for the commission, which several did. I also showed one student a strategy suggested in the lesson plan: suspend evaluation at each stage, leaving the expressions in terms of powers of 2, in order to notice a pattern. I had to work through a few stages as examples before the student was able to continue, but it was an interesting way to attack the closed form.

Several groups pursued strategies I did not anticipate. For example, one student had a feeling that, because doubling was central to the problem, the closed form would be some variation on $2^x$. They tried many different equations, using Desmos to quickly check whether the graph and table matched their computed values. I tried to advance their thinking by helping them see that we needed part of the function to grow faster than $2^x$, but guessing that $x2^x$ should be involved was a bit of a stretch, I think. Another group was trying to express a recursive idea, so I helped them clean up their notation for a recursive formula and then model it on the TI-Nspire CAS. It is necessary to use the closed form for the fee or commission in order to write a simple recursive formula for the money at the end of the day.

Something I didn’t think about ahead of time, because it wasn’t in any of the articles, was that if you take a recursive definition and parametrize it by replacing the given information with variables, the TI-Nspire is capable of making an abstract table directly from that definition.

This is similar to the approach taken by Burke, but it approaches the function from algebraic notation instead of coupled columns in a spreadsheet. I also found you don’t need “expand” for the calculator to give a form that is a recognizable pattern.
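Outside the Nspire, the same parametrized recursion is easy to sketch. Here is a minimal Python version; the $1,000 first-day payment and $100 first-day commission are the commonly published figures (assumptions on my part, since the post itself only fixes the derived values of $900 at the end of day 1 and bankruptcy on day 10, which these parameters reproduce):

```python
def balances(payment=1000, commission=100, days=12):
    """End-of-day balances: the devil doubles yesterday's balance,
    then collects a commission that doubles every day."""
    rows = []
    balance = payment - commission            # day 1: e.g. 1000 - 100 = 900
    for day in range(1, days + 1):
        rows.append((day, balance))
        balance = 2 * balance - commission * 2 ** day   # tomorrow's balance
    return rows

for day, balance in balances():
    print(day, balance)                       # the balance hits 0 on day 10
```

Changing `payment` and `commission` regenerates the whole table, which is exactly the dynamic behavior the spreadsheet formulas were meant to provide.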

After the class was over, I considered connections to later topics in math. I realized that the recurrence relation in this task is analogous to a non-homogeneous first-order linear ODE. After I made this connection, it made more sense why this recurrence does not immediately lead to an obvious closed form, and why you need heavier machinery to attack it. I’m not very familiar with the theory of non-homogeneous linear recurrence relations, but I assume there are analytic solutions in the discrete case analogous to the continuous version, probably based on characteristic polynomials. The recurrence itself is not too complicated, but guessing that the solution should have an exponential term and a term like $x 2^x$ is highly unintuitive without having studied difference or differential equations.
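As a sketch of how the discrete version can be solved by hand, write $s$ for the first day’s payment and $c$ for the first commission (generic symbols of my own, assuming the commission doubles each day as in the published versions). The trick, analogous to an integrating factor, is to divide the recurrence through by $2^n$:

```latex
a_1 = s - c, \qquad a_n = 2a_{n-1} - c\,2^{n-1}
\quad\Longrightarrow\quad
\frac{a_n}{2^n} = \frac{a_{n-1}}{2^{n-1}} - \frac{c}{2}.
% The rescaled sequence b_n = a_n / 2^n decreases by the constant c/2 each day:
b_n = b_1 - (n-1)\frac{c}{2} = \frac{s-c}{2} - (n-1)\frac{c}{2} = \frac{s - nc}{2},
% so multiplying back by 2^n gives the closed form
a_n = 2^{n-1}\,(s - nc).
```

The $nc\cdot 2^{n-1}$ piece is exactly the $x2^x$-type term, and $a_n = 0$ when $n = s/c$, consistent with going broke on day 10 when the payment is ten times the commission.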

I also created a Desmos graph where you can vary the parameters for the first day’s salary and commission and see the effect on the graph and table of values.

After I completed the lessons, I added the strategies that I did not anticipate in this first enactment to my anticipation guide, and I will use that guide the next time I teach this task. For anyone who is keen to teach this lesson, here is the complete anticipation guide:

My overall thought after finishing this challenging and rich task is that it showed me a version of what I hoped for when I started this journey with Building Thinking Classrooms. I wanted to create an environment where students in a neighborhood public high school could engage in high-level, authentic mathematical thinking. I’m excited for the rest of this school year as we continue to do more of these tasks and, hopefully, reap the benefits of a Thinking Classroom. There’s a lot to learn about other parts of the framework, especially about getting the collective synergy of this classroom to translate consistently into individual achievement and understanding, but I am very pleased with the persistent problem solving evident in this enactment of The Devil and Daniel Webster.

# Slide Rules, Affordances, Constraints, and Trig Identities

When I’ve taught trigonometric identities in the past, I have sometimes given students the prompt:

This constraint leads students to use the complementary angle identity. In the language of design, I removed an affordance from the tool. In an interview I recently heard, Nat Banting talked about constraints in the classroom and the pedagogical usefulness of obstructing students. That sounds strange, but constraints are generally acknowledged to foster creativity. That observation resonated with me and reminded me of this problem.

This week, I was messing around with slide rules, to show students how great they have it now, and how annoying it used to be to find trig function values. While they filled in a table of values with a scientific calculator, I used the giant demonstration slide rule gathering dust on a shelf in a neighboring classroom. I was surprised and impressed with myself that I could calculate values of sine with a precision of about ±0.002.

One thing that intrigued me was that the slide rule actually realizes the constraint I had used in the past. Many slide rules have three scales for trigonometry: S for sine, T for tangent, and ST for small values of both sine and tangent (since the two are nearly identical within the precision of the instrument). There is no scale for cosine; therefore, finding cos(79°) requires looking at the 11° mark of the S scale.

Conveniently, the complementary scale is often marked as well in red. Here’s an image from an online emulator (https://www.sliderules.org/react/raven.html):

This excursion into an obsolete calculating device reminded me how the history of calculation has profoundly influenced what is contained in the standards and curriculum of school mathematics. Even trigonometry itself once had much greater practical purpose. It was the toolbox needed to do calculations for navigation and astronomy, two areas of science and engineering which drove innovation across instrumentation and mathematics. In the era of GPS, far fewer people need to understand spherical geometry calculations. Someone needs to program navigational systems, but practitioners need different knowledge now, because the tools have different affordances.

The technology we teach with has a profound impact on what is considered important. If you always have access to a Computer Algebra System (CAS), then knowing how to find exact polynomial roots by hand is mostly unnecessary. You can argue for the benefits of learning factoring or certain formulae, but you can’t say that the average person, or even an engineer, strictly speaking, needs this knowledge. The question of when and how to provide students with a CAS is a fascinating one to me. The research I’ve examined indicates that having consistent access to a CAS does not actually reduce procedural fluency. What it does do is significantly reduce the computational burden of advanced exploration in algebra. But perhaps selectively removing CAS access could force students to be creative in different ways.
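For instance, any CAS produces exact roots in one line. Here is what that looks like in Python’s SymPy (my own choice of polynomial and tool, purely to illustrate the point):

```python
from sympy import symbols, solve

x = symbols('x')
# Exact roots of x^3 - 2x + 1, no factoring by hand required
roots = solve(x**3 - 2*x + 1, x)
print(roots)   # [1, -1/2 + sqrt(5)/2, -1/2 - sqrt(5)/2] (order may vary)
```

The point is not that factoring is worthless, but that the exact answer is available instantly, so the pedagogical case for hand methods has to rest on something other than necessity.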

As we continually gain greater access (free, on almost any platform) to ever more powerful technological tools for doing calculations, procedures which once seemed essential become vestigial. In a world where dividing by two was far easier than dividing by a radical, rationalizing denominators had a use. Now it serves less of a purpose, since a calculator can just as easily find the decimal value when required. Being able to algebraically manipulate a radical expression is definitely useful, but insisting that the “simplified” form have a rational denominator is mostly pedantry at this point.
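To make the arithmetic point concrete: with a table value of $\sqrt{2} \approx 1.41421$, the two equivalent forms demand very different amounts of hand work.

```latex
\frac{1}{\sqrt{2}} \approx 1 \div 1.41421
\quad\text{(long division by a six-digit divisor)}
\qquad\text{versus}\qquad
\frac{\sqrt{2}}{2} \approx 1.41421 \div 2 = 0.707105
\quad\text{(halving, one easy step)}
```

The rule made sense when the second computation was the only practical one; a calculator makes the distinction irrelevant.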

What other standards or pieces of content in the curriculum are holdovers from the necessity of calculation using tools with fewer affordances?

# Reusability and unleashing the potential of the Desmos Classroom Activity Community

One key to building complex software, especially with others, is the ability to reuse code.

I want to discuss some of the advances in the reusability of Desmos Activity Builder (AB) components and screens and Computation Layer (CL) code. I also want to discuss some of the things that, in my opinion, still need to happen to unlock the potential of the community: to create activities collaboratively, to build up a library of useful components and templates, and to expand access for teacher-users who are comfortable doing light editing for their pedagogical needs but aren’t interested in getting arms-deep into the CL.

First, let’s mention a few features of AB and CL which have increased reusability.

Let us praise Copy/Paste. The addition of copy/paste for individual slides in Activity Builder greatly increased the ability of activity creators to repurpose code from other activities. Before, you had to either copy individual code snippets and graph expressions or copy an activity entirely, which made marrying different ideas difficult. The other copy/paste feature that made life easier is pasting a Desmos graph by URL. This means that you can share an animation or a particularly useful graph feature independent of an activity. It also means that one person can develop the graph and another the interactive activity, as long as they know what the graph will expect and expose.

Being able to name a component as a variable made it possible to abstract out a particular name that is referenced many times, so the reference can be changed in only one place near the top of the code.

Collections have also been a boon to collaboration and the reuse of components. They have increased the ability for creators to share an ongoing project and for others to follow it. Some collections are already developing into libraries of templates or examples. I think the self-checking and assessment collections we’ve seen come around recently are a testament to the power of being able to group similar functionality.

Now on to the areas for improvement.

The key issue I see in the reusability of Desmos AB screens and CL is a lack of encapsulation, resulting in too many hidden dependencies when copying screens and code.

Consider the analogy of a library function in a procedural programming language. To use it, all the software author needs to do is import the function from the library and call it within their own code. They never have to open the function’s definition or rename its variables; the variables within the function definition are encapsulated. By contrast, when you copy a screen or a CL snippet from another activity, there are many places where there can be hidden dependencies, all of which must be cleaned up by the author before they can rely on that component’s behavior within their new activity.
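A minimal Python sketch of the contrast (the function and all its names are mine, purely illustrative):

```python
# Encapsulated version of a "button reveals a question" interaction:
# 'pressed' and 'question' are local names, so nothing in the caller's
# project can collide with them.
def reveal_after(pressed: bool, question: str) -> str:
    """Return the question once the trigger has fired, else a blank."""
    return question if pressed else ""

# The caller never opens the definition or renames anything inside it
print(reveal_after(True, "How much is the commission on day 5?"))
```

Everything the caller needs to know is in the signature; that is exactly the property a copied CL screen lacks, because its references point at names living elsewhere in the activity.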

Suppose someone makes a Desmos activity that has interactions across screens. For example, say there is a Button called “button1” which must be pressed to reveal a question in a Note component on the next screen. If you copy that second screen, it will not work as is in your activity. The editor will probably flag your reference to a non-existent component, but what if you also have a “button1” somewhere in your activity? Your Note component will be hidden until you realize that line is there. Similarly, when I try to reuse code from the CL of some component I like, I have to first deduce all the dependencies (if they are not documented), then replace all the references. If I don’t want that button-disabling interaction, I then have to remove that line of code. It requires a high skill level, and probably more time than most teachers have for mixing and matching tasks or pieces of an activity for their own use.

In the documentation, CL is described as being more like a spreadsheet than a procedural programming language. It is true that CL is mostly about components referencing values that live in other components, to facilitate interaction; these connections are similar to how cells in spreadsheets can reference each other. Anyone who has built or worked with extremely large and complex spreadsheets can attest that reusing formulas and changing references is a pain. People who use spreadsheets professionally have developed style guides to alleviate some of this pain. For example, in finance it’s common to put any input parameters to a model in blue, so that the user of the model knows what they should change versus what is calculated by the model. There are also programmatic features to protect sheets and cells.

(On a side note, the clutter that happens in CL due to the lack of a loop or array construct is an eyesore. There must be some syntactic sugar that could avoid things like having to fill each cell of a table with a separate line of code. Spreadsheets use array auto-filling to make the user’s life easier when repeating the same formula along a range; CL needs something to handle that.)

I don’t have a clean solution for the problem of encapsulation and dependencies; I will leave that to the folks who design programming languages. My guess is that the inspiration for an answer will come from an older system like Smalltalk or HyperCard. Message-sending or event-handling might be better paradigms than declarative spreadsheet formulas. For now, better warnings and better style might be the only mitigations.

Finally, the last thing needed to really unlock the potential of the community of Desmos AB creators is an open-source platform which is navigable. Every public user-created activity on Desmos is open source, but they are notoriously hard to find, especially for the uninitiated. The platform I imagine would be similar to GitHub, but simpler to use, probably with limited features to start. Right now, if you have a link to someone’s activity, you can go to it, Copy and Edit, and you’re off; that, to me, is like forking a repository on GitHub. However, it’s very hard to see the entire history, or which other projects forked from that same activity. Maybe I’m replicating work someone else did, but I would never know. Unless you happen across the link somewhere, like in the Facebook group, on Twitter, or in one of the various workarounds that have been developed to create a searchable index of user-created activities, you won’t know who else built off that original activity.

I understand the desire for the Search box on the front page to return only high-quality results. Right now, that means vetted by the company. But as we know, a central authority like that won’t scale. Tons of activities are being created, especially during the pandemic, and they are naturally going to vary in quality and reusability. However, I propose that the community provide the feedback on quality, through metrics like how many times an activity has been used with a class, how many stars or likes it has received, and how many times it has been “forked.” That way, we could find everything without an external workaround, and it would actually all be there, instead of only the activities that people in the know have actively added to those external sites. Folks could still make activities private, or shareable but not searchable, but this would unlock a level of collaboration that right now is hampered by not being able to find templates and examples without asking the community directly.

This turned into a mix between a treatise and a list of feature requests. I don’t want to sound ungrateful, because the features that already exist in the product are phenomenal and free. I think it’s important for all of us to think about what would really make it possible for any teacher, anywhere along the spectrum of skill in AB and CL, to create or adapt activities for their classroom. We’ve seen in other settings what happens when people are empowered to build off each other’s work with little friction. I think that can happen with Desmos Classroom Activities, and I’m excited to see what else this community will create in time.

# The square root of a reflection?

What things are squares? This question often leads to interesting and strange new worlds in mathematics.

We begin our story with whole numbers. If the whole universe you can consider is whole numbers, then the only squares are the “perfect squares” which we obtain by finding the area of a square with whole number sides.

Being unsatisfied with only some of our numbers being squares leads to a new kind of number: the irrational square roots of whole numbers which are not perfect squares.

So in the realm of real numbers, being non-negative is necessary and sufficient for being a square. To have all real numbers be squares, we must again extend, to the complex numbers. For numbers, the story stops there: all complex numbers are squares. But is that the end of the story?

Certainly not. The question of squares in finite fields, for example, leads to the beautiful theory of quadratic residues. But I want to consider squares under another operation: composition.

We can think of numbers as corresponding to the action of multiplying by that number. So 2 represents the function $f(x)=2x$. Then the square root of that action is another action, which we call $\sqrt{2}$, which, when performed twice, is the same as the action of 2.

Given a function $f:A \rightarrow A$, does there exist a function $r: A \rightarrow A$ such that $r \circ r = f$?

In general, this functional equation could be difficult to solve, so let’s consider the case where $A$ is the plane. Given a transformation $F:\mathbb{R}^2 \rightarrow \mathbb{R}^2$, can you find another transformation $T$ which is its “square root,” i.e., $T(T(x,y))=F(x,y)$?

The first question is: what if $F$ is a basic transformation, such as a translation, rotation, reflection, or dilation?

If F is a pure translation by a vector $v$, this is easy: T should be translation by the vector $\frac{v}{2}$. Similarly, if F is a pure rotation, then rotating by half the angle about the same center gives T.

Dilation yields the original notion of square root: if F is a dilation with scale factor $s$, then T should be the dilation with the same center and scale factor $\sqrt{s}$. In fact, these three actions can all be expressed as operations on complex numbers: addition yields translation; multiplication, dilation and rotation.

Here’s the interesting one: what’s the square root (with respect to composition) of a reflection?

One answer is that it doesn’t exist. To make the question well-posed, we must specify what kind of thing T must be. If T is required to be a similarity transformation, it suffices to consider the determinant: since $\det(T^2)=\det(T)^2=\det(F)=-1$, and the square of a real number cannot be negative, no real value of the determinant of T is possible.

But what if T is not necessarily a similarity transformation, but some other more exotic function of the plane? I don’t know the answer to this question, but my suspicion is that it is not possible for a reasonable function. I believe that the above argument extends to smooth maps via the total derivative.

Intuitively, though, we can imagine “rotating” the plane 90 degrees about the reflection axis, through a third dimension; doing this twice gives the original reflection. This, of course, means T is no longer a function on $\mathbb{R}^2$. But by stretching the target, we can make a sensible choice for this “square root of a reflection.”

If the original F was reflection in the y-axis, we could represent this rotation using a complex matrix: $T=\begin{bmatrix} i & 0\\ 0 & 1 \end{bmatrix}$. Then $T^2 = \begin{bmatrix} -1 & 0\\ 0 & 1 \end{bmatrix}$, which is indeed reflection across the y-axis. T is now a function from $\mathbb{C}^2$ to itself. But since it leaves the second copy of the complex numbers fixed, we can visualize the action as happening in 3-space, where the first complex coordinate is represented by the xz-plane and the real part of the second coordinate is represented by the y-axis.
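The matrix claim is quick to check numerically; here is a small NumPy verification (just a check of the computation above, not anything from the original sources):

```python
import numpy as np

# T: a "quarter turn" in the first complex coordinate, identity in the second
T = np.array([[1j, 0],
              [0,  1]])

# Composing T with itself gives a real matrix: reflection across the y-axis
F = T @ T
assert np.allclose(F, [[-1, 0], [0, 1]])

# The determinant obstruction: det(F) = -1, which no real matrix T could
# produce as det(T)^2
print(np.linalg.det(F))
```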

# Exploding Dots, Spy Codes and Minicomputers

When I was in fourth and fifth grade, my school used a math curriculum called “Comprehensive School Math Program” (CSMP). CSMP is one of the infamous “New Math” curricula developed in the 1960s. I expect I was one of the last few classes to use this curriculum.

I had occasion to revisit the CSMP materials after becoming fascinated by the phenomenon of Exploding Dots. In particular, I was struck by its similarity to some of the representations or “languages” of CSMP.

There are two points of convergence between Exploding Dots and CSMP: the abacus and the minicomputer. (It appears there is not a direct lineage, according to statements by James Tanton on Twitter.) The abacus is essentially isomorphic to Exploding Dots, while the minicomputer is related, but used much more thoroughly in CSMP.

The first is a representation in CSMP called the “abacus,” which comes in different forms (bases). For example, this is a task from the first semester of fifth grade:

This task shows addition of fractions with unlike denominators. At the time of writing, while Exploding Dots contains a decimal experience, and there is a discussion of division on other machines, no equivalent to the above task seems to be included. The process of “trading” or “exploding/unexploding” is equivalent in the two systems. The abacus uses a grey bar instead of a point to separate the ones from the negative-power place values. This has the advantage of not needing the word “decimal” for numbers in other bases, where it would be a bit of a contradiction. (Dozenal advocates use the semicolon as the separator to distinguish base-12 numerals.)

CSMP introduces the ternary abacus with a very Cold War story about spies and sending secret codes.

This script has an interesting interpretation of base 3 numbers: as functions from a finite set to a set of 3 elements. The “encoding” of the function is essentially the conversion process from base 3 to base 10, while “decoding” is the reverse.
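That interpretation is just positional notation in disguise. Here is a small Python sketch of the spy-code view, treating a base-3 numeral as a list of digits, i.e., a function from positions to {0, 1, 2} (my own illustration, not CSMP’s script):

```python
def encode(digits):
    """Base-3 digits (most significant first) -> base-10 integer."""
    value = 0
    for d in digits:
        value = 3 * value + d    # shift left one base-3 place, add the digit
    return value

def decode(n):
    """Base-10 integer -> base-3 digits (most significant first)."""
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % 3)     # peel off the lowest base-3 place
        n //= 3
    return digits[::-1]

print(encode([1, 1, 2, 0]))      # 42
print(decode(42))                # [1, 1, 2, 0]
```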

The second representation that has similarities to Exploding Dots is the minicomputer. This is actually the representation I recalled from my elementary days, as it was used consistently in the curriculum, whereas the abacus appears only occasionally.

The Papy Minicomputer is a chimera: a blend of base 2 and base 10. It is often introduced using the Cuisenaire rod colors. A checker on a box is worth that box’s value.

A single minicomputer board is base 2, but as boards are added, the values increase by factors of 10. The boxes of the second board are worth 10, 20, 40, and 80.

For example, the value of the above on the minicomputer is 800 + 20 + 4 + 1 = 825
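The minicomputer’s mixed-base arithmetic is compact enough to state in code. Here is a sketch in Python, where a checker is recorded as a pair (board, cell), boards counted from the right starting at 0 and cells worth 1, 2, 4, or 8 (the encoding is my own; a negative checker would simply carry a negative cell value):

```python
def minicomputer_value(checkers):
    """Value of a set of checkers, each given as (board, cell):
    cells within a board are worth 1, 2, 4, 8 (base 2), while
    consecutive boards scale by powers of 10."""
    return sum(cell * 10 ** board for board, cell in checkers)

# 825: a checker on the 8-cell of the hundreds board, the 2-cell of the
# tens board, and the 4- and 1-cells of the ones board
print(minicomputer_value([(2, 8), (1, 2), (0, 4), (0, 1)]))  # 825
```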

The interesting thing about the minicomputer is that there are multiple correct ways of representing a single number, without using multiple checkers on a space. Similar to “antidots” in Exploding Dots, there exist negative checkers (notated with the “hat” symbol). Minicomputers also allow checkers with different values.

The archive of CSMP materials can be found at http://stern.buffalostate.edu/. Of particular interest are the videos of Frédérique Papy teaching children using the minicomputer and other languages of CSMP, found on this page: http://stern.buffalostate.edu/Movies/index.html

# The Quadrilateral Zoo: Why Trapezoids Don’t Belong

I’m a resident teacher this year, and I’m working alongside an experienced teacher in a 10th grade Geometry class. During the unit where we discussed polygons and their properties, I came across this definition in the textbook:

A kite is a quadrilateral with two pairs of consecutive sides congruent and no opposite sides congruent.

Pearson Geometry Common Core

This definition seemed…off. Why have that second condition?

I think the idea is to exclude the case where 3 sides are all congruent to each other and the other side is not.

Ceci n’est pas un cerf-volant

But why not just say “two disjoint pairs of consecutive sides congruent” instead of “no opposite sides congruent”? The problem is that, according to the Pearson definition, a rhombus is not a kite.

Kite or no kite?

There is a famous debate over the definition of trapezoid: whether to use the exclusive or the inclusive definition.

A trapezoid is a quadrilateral with at least one pair of opposite parallel sides

Inclusive definition

A trapezoid is a quadrilateral with exactly one pair of opposite parallel sides

Exclusive definition

The inclusive definition implies that a parallelogram is a trapezoid; in other words, the set of parallelograms is included within the set of trapezoids. Under the exclusive definition, the opposite holds: no parallelogram is a trapezoid.

I decided to scientifically study this question, so I turned to that time-honored rigorous methodology of…The Twitter Poll.

No consensus apparently.

Many geometry books and educational resources have some kind of comprehensive picture of the classification of quadrilaterals. For example, the one from Wikipedia looks like this:

I do like this diagram, if only for the combination of a Venn diagram with actual examples of the quadrilaterals themselves. I notice two things: the author is using inclusive definitions for kite and trapezoid, and the isosceles trapezoid is absent.

Ready for it? Here’s my diagram:

What do you notice or wonder?

I arranged this diagram specifically to correspond to the subgroup lattice for the dihedral group of order 8.

Huh?

Basically, I’m classifying the quadrilaterals based on what portion of the full set of symmetries of a square that they exhibit.

Here’s the subgroup lattice diagram:

The notation used in this diagram is that “e” is the identity, “a” is a rotation by 90 degrees, and “x” is a reflection through two vertices, which fixes a pair of vertices and exchanges the other pair. There are two differences between this lattice and the one for quadrilaterals. First, the kite and the isosceles trapezoid each correspond to two different subgroups of order two. This is because the single reflection symmetry each exhibits can be either of two reflections in the square: through a pair of opposite vertices (for the kite) or through the midpoints of a pair of opposite sides (for the isosceles trapezoid). These two reflections are related by the rotation by 180 degrees, a.k.a. $a^2$. The other difference is that the cyclic subgroup of order 4 generated by a has no corresponding invariant set of quadrilaterals. This is because as soon as a quadrilateral has a 90-degree rotation symmetry, it is automatically a square; the reflections come for free!
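The classification itself can be checked by brute force: represent the eight symmetries of the square as matrices and count which of them map a quadrilateral’s vertex set to itself. A Python sketch, with sample vertex coordinates of my own choosing:

```python
import numpy as np

# The dihedral group of order 8: rotations by 0/90/180/270 degrees,
# plus reflections through the axes and the two diagonals
R = np.array([[0, -1], [1, 0]])                      # rotation by 90 degrees
reflections = [np.diag([1, -1]), np.diag([-1, 1]),
               np.array([[0, 1], [1, 0]]), np.array([[0, -1], [-1, 0]])]
D4 = [np.linalg.matrix_power(R, k) for k in range(4)] + reflections

def symmetries(vertices):
    """Count the elements of D4 mapping the vertex set onto itself."""
    pts = {tuple(v) for v in vertices}
    return sum(1 for M in D4
               if {tuple(M @ np.array(v)) for v in pts} == pts)

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
kite = [(0, 1), (1, 0), (0, -2), (-1, 0)]            # mirror line: the y-axis
rectangle = [(2, 1), (-2, 1), (-2, -1), (2, -1)]
print(symmetries(square), symmetries(kite), symmetries(rectangle))  # 8 2 4
```

The counts 8, 2, and 4 are exactly the subgroup orders at the square, kite, and rectangle nodes of the lattice.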

So teachers out there are probably asking themselves: where does this debate fit in the curriculum? Well, consider:

CCSS.MATH.CONTENT.HSG.CO.A.3
Given a rectangle, parallelogram, trapezoid, or regular polygon, describe the rotations and reflections that carry it onto itself.

Common Core State Standards for Mathematics

OK, the standard doesn’t mention kites or rhombuses, which is a bit strange, but it is still clearly trying to get at a symmetry viewpoint of these shapes. In fact, the CCSSM takes a thoroughly transformational perspective on geometry: the equivalence relations of congruence and similarity are consistently grounded in the transformation groups of rigid motions and dilations.

Interestingly, the standard says “trapezoid” even though a generic trapezoid has no symmetries at all.

In fact, what is up with trapezoids? As we saw, there’s some debate about the exact definition, but it always involves the idea of parallel opposite sides. Yet we don’t need parallelism to define any of the other quadrilaterals. This points to a deeper fact: trapezoids don’t even have to exist! In spherical geometry, there are no parallel lines, and therefore there are no trapezoids in spherical geometry.

Except, there are isosceles trapezoids. Because if we define an isosceles trapezoid as a quadrilateral with a reflection symmetry through the midpoints of opposite sides, then it exists perfectly well in spherical geometry. So here’s my hot take:

Isosceles Trapezoids are a more natural subset of quadrilaterals than Trapezoids.

When I say natural, I mean that it applies in a more general context, and fits more neatly in the symmetry classification. I consider the symmetry classification to be more consistent with the modern, transformational geometrical understanding than a classification based on sides and angles.

My friend Doug O’Roark pointed out that Zalman Usiskin has written an entire book on this subject, The Classification of Quadrilaterals: A Study of Definition. So this is not the end of the story. But for now, my current opinions are that trapezoids are weird, inclusive definitions are just better, and symmetry is a powerful and modern way to look at quadrilateral classification.

# Aggregating Bi-variate Data in Desmos Activity Builder

I was creating an Activity Builder adaptation of a 3-Act task called “Gas Station Ripoff,” and I needed to aggregate bivariate data across the whole class. [Original here. My version here.]

The only problem: aggregate only works on lists of numbers.

My work around was to add the following code:

What’s going on here? Well, “pump1point” is a math input box. I get its latex content and parse it as an ordered-pair object. Then I take the x-value (first coordinate) and call the numericValue function so that aggregate can accept it.

Ultimately, what happens is I have a list called $G_1$ which contains the x-values for all the students. I do the same thing for the y-values, getting a list called $P_1$.

The advantage of doing it this way is that the CL eats the whole input at once, which means that student responses remain coupled, and in order when they are aggregated.
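For readers who don’t use CL, the coupling point is easier to see outside it. This is not CL code (there, parseOrderedPair and aggregate do the work); it is a plain Python analog of the same parse-then-split idea, showing why parsing each whole pair at once keeps every student’s coordinates together:

```python
def split_pairs(responses):
    """Parse strings like '(3, 4.5)' into parallel x- and y-lists.
    Because each string is parsed whole, a student's x and y stay
    coupled, and in the same order across both lists."""
    xs, ys = [], []
    for r in responses:
        x, y = (float(part) for part in r.strip().strip('()').split(','))
        xs.append(x)
        ys.append(y)
    return xs, ys

print(split_pairs(['(1, 2.5)', '(3, 4)', '(0.5, 7)']))
```

Aggregating x-values and y-values in two independent passes would risk pairing one student’s x with another’s y; parsing pairs first rules that out.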

The final stage is to graph the list of points. After initializing each list, I simply put the following in the expression list of the graph:

The only drawback to this method is that students have to be precise about how they enter the data; it must be entered in the correct order. I’m not sure how robust parseOrderedPair is when there are missing or extra parentheses.

Please let me know if you find this useful, and comment with any questions. I am still learning the Computation Layer, so feedback from experts is appreciated.

# Hello World

Here’s what you will find on this blog:

1. A commitment to anti-racism and social justice
2. Reflections on being a new(ish) high school math teacher
3. My thoughts about educational technology (GGB, Desmos, and TI most frequently)
4. Free resources such as technology activities I create and lesson materials I have written
5. Mathematics I am exploring as a learner (expository writing as an exercise to help me learn)