Patrick Truchon's Web Portal

Posts Tagged ‘math/science’

Synthetic Biology

Posted by Patrick on November 30, 2012

Last week, Quirks and Quarks had a segment about synthetic biology [1]: a new branch of science whose goal is to design and construct new biological functions and systems not found in nature. [2]

The explicit assumption of this branch of science is that DNA is a kind of computing code.  Indeed, Canadian researcher Andrew Hessel says that DNA is a “tremendous medium for encoding information: it’s far more robust and compact than even electronic data storage, and it’s really the code of life.  So we’re looking at it through the lens of computing […], which I think is a remarkable shift.” [1, 2:17]  Seen through this lens, these researchers want to reprogram living organisms to make them do new and useful things.  Imagine, for example, “bacteria that breathe CO2 and pee straight diesel fuel” [1, 2:58].  These researchers believe that this technology could save the world.

Whether or not you get excited by the possibilities that we may finally live in harmony with nature (by controlling it even more drastically), two things concern me.  The first is implicitly outlined in one of Hessel’s comments:

“I think this is the most powerful technology we’ve ever made.  The only thing that I think compares to it is electronic computing.  And really, we’ve seen how electronic computing has revolutionized and continues to revolutionize the world.  I think this is even more powerful because now we’re talking about programming not electronic processors, but living processors.”  [1, 3:48]

This technology is the most powerful we’ve ever made… Are we wise enough to foresee all the consequences of reprogrammed organisms?  At one level, DNA works like computer code and new (and better) programs can be written, but biological organisms also interact with one another and evolve.  Do we seriously think we’re smart enough to understand all these interactions?  Taken as a whole, this new field would be orders of magnitude more complex than the entire Internet, which is by no means simple.  This time though, programming “bugs” may be more dangerous than simple computer glitches.

Maybe you think that spreading FUD is not the most compelling line of reasoning.  After all, the same has been said about other fields of science before.  Nuclear physics was supposed to lead to global planetary destruction, and we’re still here.  Fair enough.  Maybe we are (or will become) smart enough…

My second concern is not so much about the technology itself but the legal infrastructure surrounding it: we live in a world where companies like Apple Inc. patent things like “rectangle with rounded corners”. [3]  Patents on software are just as ridiculous and detrimental for innovation since:

[They] block individuals from taking part in […] development and distribution […]  This may not seem relevant to most people but it’s the same as the freedom to write a book. Most people will never write a book, but some people will and society as a whole benefits from what is made by the few […]  [4]

If the evolution of synthetic biology is inevitable, I hope it doesn’t follow the insane route that commercial software and electronic devices have taken.  Exploring such a powerful science requires openness, collaboration, and governmental oversight, not secrecy and commercial control.  If we are going to engage in geo-engineering and massive biological reprogramming, the legal model of the Free Software Foundation [5] is probably the best place to start, if not the only one that will be safe and sustainable.


  1. Quirks and Quarks: Using DNA to Save the World
  2. Wikipedia: Synthetic Biology
  3. The Verge: Apple finally gets its patent on a rectangle with rounded corners
  4. End Software Patents
  5. Patrick Truchon, Free Software

Posted in Uncategorized | Tagged: , , , | Leave a Comment »

Math Games: Simulators and Instruments

Posted by Patrick on September 6, 2012

I’m in the process of changing my mind about a topic I don’t know much about: the gamification of learning, in particular, the gamification of mathematics learning.

Here’s my preconceived idea about math games: it’s sugar coating.  There’s something you have to practice; it’s hard, and you’re not very interested. So to make it less painful, we’ll add points you can earn for each correct answer (or better yet, monsters you can defeat by factoring polynomials, with cool graphics and stuff).  Hopefully, you’ll want to sit in front of your computer 10 minutes longer than with your textbook to practice your math.  In my opinion:

  • The math in these games has nothing to do with the context of the game (which is often true of textbook questions too, mind you [2]).
  • These games do nothing to foster internal motivation and appreciation of mathematics.
  • They focus on skills, not mathematical and conceptual thinking.
  • They are really just fancy worksheets with blinking lights and noise to keep you awake.

I’m realizing now that that idea is a bit of a straw man.  In a webinar [1] he presented back in January, Keith Devlin (@profkeithdevlin) clarifies what math games have been, are, and can be.  He uses the analogy of a flight simulator, or a musical instrument, to convince us that well-designed math games could be invaluable tools to help students investigate abstract ideas in a world that makes them more concrete.  He doesn’t want math games to replace instruction; instead, he wants them to be a complementary tool of discovery, where students can think mathematically without having to worry about the notation.

In one of his previous books, Devlin argues that what makes math hard is its level of abstraction; the logic is often simpler than that of a soap opera. [3]  Now, to extrapolate a little bit from Devlin’s presentation, it seems to me that a good way to teach mathematics would be to:

  1. Use well designed games to explore mathematical thinking and logic in a context that is intuitive and non-symbolic.
  2. Slowly introduce symbols and layers of abstraction.
  3. Practice on synthesizing these two aspects.
  4. Repeat with new concepts…

There’s a catch though, which Devlin mentions briefly: It makes no sense to test students on the second part if they are still on the first part.  Can you imagine if part of the assessment process was to have students play a game so we could see what they struggle with?


  1. Keith Devlin, Game-Based Learning Webinar Recording
  2. Dan Meyer, [PS] Critical Thinking
  3. Keith Devlin, The Math Gene


Driving Me Nuts

Posted by Patrick on February 16, 2012

Yesterday, Peter (@polarisdotca) asked this question:

Why does tying knot in strip of paper form a regular pentagon? Why not 6, 7,…? Why regular? Anyone have intuitive explanation? #wcydwt [1]

Being a rock climber, I like knots; I DEPEND on knots!  Being a math and physics teacher, I like puzzles; I DEPEND on puzzles.  So naturally, this one piqued my interest.  Here’s what I’ve got so far:

The first step was to recreate the experiment, so I started by making a regular knot (actually called the “Overhand knot” [2]) with a strip of paper:

Then, I tried to flatten it as tightly as possible without breaking it:

It’s a little loose at the “exit points”, but we can easily imagine that the “ideal case” would indeed be a regular pentagon (regular because all sides are the same length; pentagon because it has five sides).  So now: why is that?

Intuitively, I think there can only be five sides because there are three folds and two exit points, for a total of five.  That’s how the knot is made, by folding the rope three times onto itself:

Here’s what it looks like when unfolded:

Three of the sides are from folding, and two of the sides are just the edge of the strip of paper, which correspond to the exit points.

Why does it have to be regular though?  Is it because that’s the most compact configuration?  Is this shape the solution to some optimization problem (like the greatest ratio of area to perimeter, which minimizes some energy function or something)?
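One small piece of bookkeeping that helps when comparing candidate shapes: a regular n-gon has interior angles of (n − 2) × 180 / n degrees, so a regular pentagon needs 108° corners while a hexagon needs 120°.  It doesn’t answer the “why”, but here’s that computation as a couple of lines of Python (the function name is my own):

```python
def interior_angle(n):
    """Interior angle, in degrees, of a regular polygon with n sides."""
    return (n - 2) * 180 / n

for n in (5, 6, 7):
    print(n, interior_angle(n))  # 5 -> 108.0, 6 -> 120.0, 7 -> ~128.6
```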

My next question was: how would a Figure-Eight knot [3] behave?  I was not only interested in this knot because I probably use it more often than the overhand knot, but because my trick to make it is to start it like an overhand knot, then finish it an extra half turn later (i.e., that would add an extra fold in the strip of paper!)  Could this lead to a 6-sided figure?
Here it is loose:

And flattened:

Yep: four folds and two exit points.  Here’s the weird thing though: one of those exit points is not even “connected” to the other sides:

Why is that?!  Also, if I could make it perfectly, would it also be a regular polygon, or is it intrinsically elongated?  Thanks Peter!  This puzzle is driving me nuts!


  1. Peter Newbury’s Tweet
  2. Animated Knots, Overhand Knot
  3. Animated Knots, Figure 8 Bend


CO2 Levels (a depressing story)

Posted by Patrick on November 27, 2011

A few days ago, I listened to an ABC radio podcast on All in the Mind entitled “The case for moral enhancement”. [1] I was expecting the ethical minefield of eugenics to be discussed (which it was), but I was surprised by the turn of the conversation towards the end: one of the reasons why we’d want to enhance our moral compass is that we didn’t evolve to deal with problems that affect the entire population of the planet.  In particular, one of the professors grimly said that “it’s wishful thinking to think that people are going to voluntarily deal with climate change”.  Heavy stuff!

Today it was CBC radio’s Quirks and Quarks’ turn to tackle the issue of climate change. [2] Again, it was nothing short of depressing.  Very…  Depressing…  One of the guests said that our inability to deal with the problem not only means that we’ll face catastrophic repercussions, but it also says something pretty grim about ourselves: “Can we not deal with an ethical issue about the lives of billions of people around this planet?”

Because I like to understand the information contained in graphs, I clicked on the one posted on the Quirks page [2], which led me to its source on Wikipedia [3], which led me to the source of the raw data [4].  I decided to import that data into a spreadsheet to see what information I could extract from it.

Using two simple functions and a method called “least squares” [5] to scale them properly, I managed to find the parameters that model the CO2 concentration as a function of time.  Visually, the orange graph (the model) follows the blue graph (the data) pretty well, so the model I found is pretty good (within that range of time anyways).
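The spreadsheet fit itself isn’t shown in the post, but a least-squares fit of this kind of model is easy to sketch with SciPy’s `curve_fit`.  This is only an illustration: the parameter names are mine, and synthetic data stands in for the real Mauna Loa file, which isn’t included here.

```python
import numpy as np
from scipy.optimize import curve_fit

def co2_model(t, base, amp, diff_1958, doubling):
    """Baseline + seasonal sine + a difference from baseline that doubles
    every `doubling` years (t is the calendar year)."""
    return (base
            + amp * np.sin(2 * np.pi * t)
            + diff_1958 * 2 ** ((t - 1958) / doubling))

# Synthetic stand-in for the Mauna Loa monthly record
t = np.arange(1958, 2011, 1 / 12)
data = co2_model(t, 270, 2.7, 45, 37)

# Least-squares fit, starting from a rough initial guess
params, _ = curve_fit(co2_model, t, data, p0=(260, 2, 40, 30))
print(np.round(params, 1))  # should recover roughly [270, 2.7, 45, 37]
```

With real (noisy) data the recovered parameters would of course only be approximate, and the seasonal term would need a phase offset as well.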

I found the equation of the model (the orange graph) to be, with t measured in years (t₀ just fixes the seasonal phase):

C(t) = 270 + 2.7 sin(2π(t − t₀)) + 45 × 2^((t − 1958)/37)

It looks complicated, but there are basically three pieces to this function, each with its own particular meaning.

The first part is just the number 270.  What it means is that if we go back in time by more than a few hundred years, the average CO2 concentration in the atmosphere would have been around 270 ppmv (compare that to today’s 390 ppmv!)

The second part is responsible for the oscillation of the concentration due to seasons.  The number 2.7 in front of the sine function means that the concentration increases from its average value by 2.7 ppmv in the winter and decreases by 2.7 ppmv in the summer.  So the total variation (of about 5.4 ppmv) is pretty small (compared to the average increase).

The third part is what we’re responsible for.  It says that the difference in CO2 from the ancient average of 270 ppmv will double every 37 years.  This is a bit tricky, so here it is again: if you take the concentration of CO2 today and subtract what it was hundreds of years ago, that difference will double in 37 years’ time.  For example:

  • The concentration was around 315 ppmv in 1958, which is a difference of 45 ppmv from 270 ppmv.
  • 37 years later (in 1995), the concentration was 360 ppmv, which is a difference of 90 ppmv from 270 ppmv (double the previous difference of 45 ppmv)
  • Another 37 years later (in 2032), the concentration should be (if the trend continues) 450 ppmv, because there should be a difference of 180 ppmv from 270 ppmv (double the previous difference of 90 ppmv)
  • And in 2069? 630 ppmv, because it’ll be 360 ppmv more than 270 ppmv…
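The arithmetic above can be checked in a couple of lines, using only the long-term trend (seasonal wiggle omitted; the function name is mine):

```python
def trend(year):
    """Long-term CO2 trend: 270 ppmv plus a difference that starts at
    45 ppmv in 1958 and doubles every 37 years."""
    return 270 + 45 * 2 ** ((year - 1958) / 37)

for year in (1958, 1995, 2032, 2069):
    print(year, round(trend(year)))  # 315, 360, 450, 630
```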

So according to this model, if the trend continues (i.e., we keep doing what we’re doing now), the atmosphere will reach levels of CO2 comparable to those of the Eocene–Oligocene extinction event 34 million years ago (around 760 ppmv) [3] within a few 37-year periods!  And I thought the podcasts were depressing…  The next graph shows this extrapolation in both directions: the model (in orange) is graphed (without the seasonal variations) between 1750 and 2100 with the actual data (in blue).  The future looks completely crazy, but other data suggest that the past portion is actually pretty spot on. [3]

Now, to be fair, the assumption that “we keep doing what we’re doing now” implies at least two things that are very unlikely:

  1. Our population will continue to grow exponentially.
  2. Our resources of fossil fuels will continue to match our growing demands.

In reality, we’ll either find ways to turn this around, or we’ll suffer from other problems that will curb our population explosion and our ability to consume so much fossil fuel.  One thing is certain: we can’t let that orange curve go that high.


  1. All in the Mind, The case for moral enhancement
  2. Quirks and Quarks, The Rocky Road to Durban
  3. Wikipedia, Mauna Loa Carbon Dioxide-en.svg
  5. Wikipedia, Least Squares


Conceptualizing Physics

Posted by Patrick on June 27, 2011

The following two videos address one of the questions that I ponder the most: what are the best ways to help students understand concepts in mathematics and physics?  Although both speakers reach similar conclusions, they each reveal many other insights that are also very important.  Here are a few lessons that I take from each.

Derek Muller (@veritasium) shows that:

  • Students new to physics arrive with misconceptions about the physical world that they believe are true.
  • Because of this, they don’t pay their utmost attention to the videos (which might as well be traditional lectures).
  • This causes them to think that what is being presented is the same as what they already think.
  • So they don’t learn anything.
  • While getting more confident in their misconceptions.

But his interviews with the students also showed something else:

  • Students are bad at judging how much a video (or lecture) is helping them learn.

This part I found very interesting.  Indeed, the “clear” videos didn’t help them learn as much as the “confusing” ones.  Although Derek doesn’t make that leap, I think this applies equally well to traditional classroom lectures.  Furthermore, it also suggests that students’ evaluations of teachers are (at best) an incomplete metric of teachers’ effectiveness, if not a completely bad one.  Of course, it doesn’t mean that the way to help students is to be as confusing as possible, but now I’m wondering if the good feedback I tended to get about my teaching was such a good thing…

In essence, Derek says that for students to really learn physics, they have to engage and struggle with the concepts on their own terms.  Delivering information is not sufficient for learning.  Dr. Eric Mazur (@eric_mazur) also comes to the same conclusion but in the context of the lecture hall:

This time, Dr. Mazur breaks down learning into two parts [3]:

  1. Delivery of information
  2. Synthesis of information

Traditionally, classroom lectures have focused on the first part, but it is the second part that constitutes true learning.  Thus, he assigns readings ahead of time (or finds other ways for students to get the information before they enter the classroom) so that students can spend more time in class synthesizing information instead of being passive recipients.

Dr. Mazur also reaches a second conclusion: conceptual understanding leads to good problem-solving abilities, but good problem-solving abilities don’t necessarily imply conceptual understanding.  This strikes at the heart of traditional assessment methods.  Simply giving problems to solve doesn’t discriminate between those who understand what’s going on and those who have memorized an algorithm they don’t really understand.

In my practice, I always try to emphasize the “why” of things over the “how” (mainly because I have a bad memory myself).  It’s encouraging to see research that validates that philosophy, and enlightening to see the various methods used by these inspiring educators.

  • Update: I added a reference relating to Howard Gardner that is very relevant to this post. [4]
  • Update 2: I added a reference to an article describing the results of a team of researchers at UBC that supports what Dr. Mazur is doing. [5]


  1. Veritasium, Khan Academy and the Effectiveness of Science Videos
  2. Eric Mazur, Memorization or understanding: are we teaching the right thing?
  3. Mazur Group Publication, Peer Instruction: Making Science Engaging
  4. The Daily Riff, Misconceptions About Learning & Teaching
  5. ScienceNOW, A Better Way to Teach?
