Even under the fairly restrictive market assumptions given in my previous post, we can generate instances of school-choice markets that exhibit chaotic dynamics. Here is an example. I let the tâtonnement process run for 8000 iterations and plotted the final 7000 cutoffs. The blue and red points indicate even and odd iterations; the behavior is bounded and quasiperiodic but has no obvious symmetry. Here is the code.

### Dynamic Admissions Markets

School-choice markets are like any economic market in that they are characterized by prices (score cutoffs), supply constraints (school capacity), and consumer demand (student interest). In general, a tâtonnement or price-adjustment process converges quite well for school-choice markets when students’ preferences follow a multinomial logit choice model and their scores at each school are independent or positively correlated.
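To make the price-adjustment idea concrete, here is a minimal sketch of a tâtonnement loop in Python. It is not the code from my package: the utilities, capacities, and independent uniform scores are all simplifying assumptions made for illustration. Each school raises its cutoff when demand exceeds capacity and lowers it otherwise.

```python
import numpy as np

rng = np.random.default_rng(0)

n_students = 20_000
utils = np.array([1.0, 0.5])            # assumed MNL utilities for two schools
capacities = np.array([0.3, 0.3])       # assumed capacities, as fractions of students
scores = rng.random((n_students, 2))    # iid uniform scores (a simplification;
                                        # the post uses correlated composite scores)

def demand(cutoffs):
    """Fraction of students enrolling at each school, given score cutoffs."""
    admitted = scores >= cutoffs                 # which schools admit each student
    w = np.exp(utils) * admitted                 # MNL weights; zero if rejected
    denom = 1.0 + w.sum(axis=1)                  # the 1.0 is the outside option
    return (w / denom[:, None]).mean(axis=0)

cutoffs = np.array([0.5, 0.5])
for _ in range(500):
    # tâtonnement step: move each cutoff in proportion to excess demand
    cutoffs = np.clip(cutoffs + 0.2 * (demand(cutoffs) - capacities), 0.0, 1.0)
```

After the loop, demand at the final cutoffs matches the capacity vector closely, which is the equilibrium condition described below.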

A unique challenge in school choice is how to model the preferences of schools over students and vice versa. In this example, I use a standard model for student preferences known as the multinomial logit choice model. To express the fact that student preferability is imperfectly correlated between schools, I assume schools score students using a convex combination of orthogonal dimensions of student quality. For example, each student may have a language score and a math score, and the school’s decision is how much weight to place on each of these and what minimum score it should require for admission.

(In reality, if the language and math scores correlate, we can use principal component analysis to extract the orthogonal components of the score vector, so the only loss of generality here is that we have limited the score dimensions to two. The number of dimensions used can be increased, but at considerable computational cost.)
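The decorrelation step can be sketched in a few lines of NumPy. The correlation structure below is invented for illustration; the point is only that an eigendecomposition of the covariance matrix yields orthogonal score dimensions, after which a school's composite score is a convex combination of them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical correlated language and math scores sharing a latent factor
n = 10_000
latent = rng.standard_normal(n)
lang = 0.8 * latent + 0.6 * rng.standard_normal(n)
math = 0.8 * latent + 0.6 * rng.standard_normal(n)
X = np.column_stack([lang, math])

# PCA via the covariance matrix: eigenvectors give orthogonal directions
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
components = Xc @ eigvecs          # decorrelated score dimensions

# A school's composite score: convex combination with a hypothetical weight
w = 0.7
composite = w * components[:, 0] + (1 - w) * components[:, 1]
```

Note that `numpy.linalg.eigh` returns eigenvalues in ascending order, so the columns of `components` are ordered from least to most variance.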

The first pane of the animation visualizes the possible sets of admitted schools and associated probabilities. A student who scores high on both tests will be admitted to both schools, while one who scores high on the math test but low on the language test may only get into the Antarctic Institute of Technology, and so on. Each school starts out with a random score cutoff and adjusts it over time to make the number of students who enroll agree with its capacity—when this occurs, we say the market is in equilibrium.

At equilibrium, the total percentage of students who receive no admissions offers (the white area of the graph) must equal 0.4, which is one minus the sum of the schools’ capacities (expressed as a fraction of the number of students).
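The accounting here is just one line of arithmetic; the capacities below are assumed values consistent with the 0.4 figure (the post implies only that they sum to 0.6).

```python
# Hypothetical capacities for two schools, each seating 30% of the students
capacities = [0.3, 0.3]

# At equilibrium every seat is filled, so the unmatched share is the remainder
unmatched_share = 1 - sum(capacities)   # 0.4 = white area of the graph
```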

The second pane shows the motion of the cutoff and demand vector over time. The gray pointers indicate the excess demand at those cutoffs; this graph thus shows that the equilibrium is stable, because all the integral curves lead to it. I configured the step parameter in this example so that it converges to the equilibrium gradually, but with better parameters we can use the tâtonnement process to find the equilibrium quite quickly, even in large markets with more schools and tests.

You can find this example in self-contained form here on GitHub. A collection of equilibrium-finding algorithms is part of my DeferredAcceptance Julia package.

### Stable Matching on Planet Money

*Planet Money,* an NPR show about economics, recently ran an episode entitled “The Marriage Pact” that deals precisely with my research topic. It’s a great episode that discusses both the basic ideas behind stable assignment as well as its applications in organ donation, job placement, and (my area of focus) school choice.

The episode begins with an interview with a Stanford econ student who designed a marriage market for his peers and managed to get 4111 of them to sign up for it. What a cool project! The student makes a few minor misstatements about the Gale–Shapley proposal algorithm that *Planet Money* leaves uncorrected. In this post, I want to offer a few corrections, not just because I can, but because in my opinion these marginal details are what make stable assignment an interesting and profitable research topic.

The interviewee says that because the proposal algorithm produces a stable matching, it is guaranteed to provide participants with their “best” match. But it turns out that even when the participants in the proposal algorithm have strict preferences, there can actually be more than one stable matching, and which stable matching you get depends on which form of the algorithm you use.

If you run the algorithm in the male-proposing form, you get a match that is simultaneously male-optimal and female-pessimal. This means that among all of the women Fido could be stably matched with, the male-proposing algorithm is guaranteed to pair Fido with his first choice, and to pair Fido’s mate with the male she likes *least* among the males she could stably match with.

Likewise, if you run the algorithm in the female-proposing form, you get a match that is female-optimal and male-pessimal.
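This asymmetry is easy to see in code. Below is a minimal sketch of the Gale–Shapley deferred-acceptance algorithm, run in both directions on a hypothetical 3×3 instance (the preference lists are constructed so the two directions disagree; they are not from the Stanford match).

```python
def gale_shapley(proposer_prefs, receiver_prefs):
    """Return the proposer-optimal stable matching (deferred acceptance)."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_idx = {p: 0 for p in proposer_prefs}   # next receiver each proposer tries
    engaged = {}                                # receiver -> tentatively held proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if r not in engaged:
            engaged[r] = p                      # receiver holds the first proposal
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])             # receiver trades up, jilting her hold
            engaged[r] = p
        else:
            free.append(p)                      # proposal rejected
    return {p: r for r, p in engaged.items()}

# Hypothetical instance where the two proposal directions disagree
men = {"m1": ["w1", "w2", "w3"], "m2": ["w2", "w3", "w1"], "m3": ["w3", "w1", "w2"]}
women = {"w1": ["m2", "m3", "m1"], "w2": ["m3", "m1", "m2"], "w3": ["m1", "m2", "m3"]}

male_optimal = gale_shapley(men, women)     # every man gets his first choice here
female_optimal = gale_shapley(women, men)   # every woman gets hers instead
```

In this instance the male-proposing run pairs each man with his favorite, and every woman with her least-favorite stable partner; the female-proposing run does exactly the reverse. Both matchings are stable.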

In the episode, the hosts mention that the algorithm provides the participants with no incentive to lie about their preference order (notwithstanding misrepresenting basic facts about *themselves* like height). But an important corollary of the result above is that the male-proposing algorithm is actually *not* strategy-proof for the *female* participants—by lying about her preferences, a woman can sometimes secure a better partner than the pessimal one she receives under truthful reporting.

I don’t know how the Stanford match works, but it’s likely that (for the straight participants) they used an extension of the Gale–Shapley algorithm developed in the 1980s that finds the “male–female optimal stable match”—a sort of compromise between the male- and female-proposing forms. This extended algorithm, while egalitarian, isn’t strategy-proof for *either* group of participants, precisely because it gives both groups a slightly suboptimal match.

The good news is that in the one-to-one matching case with strict preferences, the differences between the male- and female-optimal matches are quite minimal (usually just shuffling a few pairs around at the edges, if any), and the “strategy” you need to use to beat the market is not at all obvious.

But when we relax the assumption of strict preferences and start to work with many-to-one matching, as is the case in school choice, the set of stable matches becomes very large, and coming up with a good way to maximize participants’ outcomes while maintaining incentive compatibility is an interesting challenge. The dominant contenders in this space are randomized “tiebreaking” (lottery) mechanisms, including those used in school markets in New York, Boston, and elsewhere. I’ve run a bunch of computational experiments on these mechanisms, and you can see some fun graphs on the GitHub page for my Julia package DeferredAcceptance.
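The basic distinction among these lottery mechanisms is between a single tiebreaking lottery shared by all schools and independent lotteries at each school. A minimal sketch, with invented student names and schools:

```python
import random

random.seed(0)
students = ["alice", "bob", "carol", "dana"]
schools = ["X", "Y"]

# Single tiebreaking (STB): one lottery number per student, used by every school,
# so all schools rank the tied students identically
lottery = {s: random.random() for s in students}
stb_priority = {school: sorted(students, key=lottery.get) for school in schools}

# Multiple tiebreaking (MTB): each school draws its own lottery numbers,
# so the same student can be lucky at one school and unlucky at another
mtb_priority = {school: sorted(students, key=lambda s: random.random())
                for school in schools}
```

Which of the two yields better student outcomes is one of the questions my computational experiments explore.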

(I sent a version of this to the *Planet Money* team, but I’m not betting on a response.)

### Slides and Registration

I’ve gotten a little bit ahead of myself on my seminar slides, so here is a compilation of what I have created so far. For the most part, they should stand on their own, but some degree of familiarity with the Gale–Shapley assignment process wouldn’t hurt (it isn’t hard to grasp).

My DeferredAcceptance package, now at version 0.3.1, was also accepted into the general Julia package registry.

### Winter Quarter Update

I have greatly expanded the functionality of DeferredAcceptance, my school-choice code library. New features include the nonatomic form of the DA algorithm, several new tiebreaking mechanisms, and further replications of recent experimental results from the literature on stable assignments. Two interesting research results from the past ten years are a tiebreaking mechanism that lets students name a “target school” where they will receive enhanced admissions priority, and a two-round dynamic *re*assignment mechanism that uses the spaces freed up by students who leave the market (e.g. to attend private school) to compensate for inequality in the primary assignment round.

I also coded a practice implementation of an interior-point method for solving linear programs. During winter break, our lab has been working through Jorge Nocedal and Stephen Wright’s *Numerical Optimization* textbook, and the section I was responsible for included the predictor-corrector algorithm given above.
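For a flavor of how these methods work, here is a sketch of the basic damped-Newton path-following step that predictor-corrector methods like Mehrotra's build on. This is not my practice implementation, and it omits the predictor-corrector refinement: it just follows the central path of the LP min cᵀx subject to Ax = b, x ≥ 0 by solving the full KKT system at each iteration.

```python
import numpy as np

def ip_lp(A, b, c, iters=30, sigma=0.1):
    """Basic primal-dual path-following for min c@x s.t. A@x = b, x >= 0."""
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)   # infeasible interior start
    for _ in range(iters):
        mu = x @ s / n                              # current duality measure
        # Newton step on the perturbed KKT conditions, solved as one block system
        J = np.block([
            [A,                  np.zeros((m, m)), np.zeros((m, n))],
            [np.zeros((n, n)),   A.T,              np.eye(n)],
            [np.diag(s),         np.zeros((n, m)), np.diag(x)],
        ])
        r = np.concatenate([b - A @ x,              # primal residual
                            c - A.T @ y - s,        # dual residual
                            sigma * mu - x * s])    # centered complementarity
        d = np.linalg.solve(J, r)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        # damped step: stay strictly inside the positive orthant
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.9 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# Toy LP: maximize x1 + x2 subject to x1 + x2 + x3 = 1 (x3 is a slack)
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -1.0, 0.0])
x, y, s = ip_lp(A, b, c)
```

Forming the full KKT matrix is wasteful (production solvers reduce it to the normal equations), but it keeps the sketch close to the textbook derivation.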

During winter quarter, I completed a course in revenue management and pricing. For my final presentation in that class, I reviewed a paper that concerns which products should be presented to a customer in order to maximize revenue (assortment optimization).

One rewarding aspect of this class was the exposure to a wide range of models in choice theory, which allows us to express mathematically the processes consumers use when deciding which product to buy. So far, in my research in school choice, I have focused on the assignment aspect of the problem, which takes place after students’ preferences are revealed. But robust simulation of school-choice markets requires effective modeling of uncertainty and information asymmetry on the demand side.
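As a small illustration of the kind of choice model involved, here is a sketch of expected revenue under the multinomial logit model, with a brute-force search over assortments. The prices and utilities are invented for the example, and brute force is only viable here because the product set is tiny; the assortment-optimization literature is about doing better.

```python
import itertools
import numpy as np

def mnl_revenue(prices, utils, assortment):
    """Expected revenue per customer when `assortment` (a tuple of product
    indices) is offered and customers choose by multinomial logit.
    The no-purchase option has utility 0, hence weight 1 in the denominator."""
    idx = list(assortment)
    w = np.exp(utils[idx])
    probs = w / (1.0 + w.sum())          # purchase probability of each product
    return float(prices[idx] @ probs)

# Hypothetical prices and mean utilities for three products
prices = np.array([10.0, 8.0, 6.0])
utils = np.array([0.0, 0.5, 1.0])

# Evaluate every nonempty assortment and keep the best
best = max((a for r in range(1, 4) for a in itertools.combinations(range(3), r)),
           key=lambda a: mnl_revenue(prices, utils, a))
```

A sanity check: offering only product 0 (price 10, utility 0) gives weight 1 against the outside option's weight 1, so the purchase probability is 1/2 and the expected revenue is 5.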