However, from time to time, a truly complex problem arises, and the available tooling proves insufficient. In such cases, a little theory can save the day! Today, by way of illustration, I’m going to tell you the story of a mysterious language whose protagonists are the Directorate General of Public Finance (Direction Générale des Finances Publiques or DGFiP) and automatic theorem provers. We will then see how we can use automatic theorem provers to study the tax code and how, more generally, the law can be expressed algorithmically. This story led to the Verifisc project, and paves the way for a reappropriation of the complexity of the French tax system using tools that are accessible to everyone.
Calculating your tax rate: a democratic concern?
In parallel with my doctoral work, I try to keep an eye out for opportunities to apply my field of research to new problems. Hence, in January 2019, I came across a talk with an intriguing title, “Formal Methods and the Law”, given by the law professor Sarah Lawsky at the POPL conference in 2018. Reading her articles, I realized that some parts of the law describe very precise processes, sometimes with the help of mathematical calculations. These are simply algorithms! More specifically, Lawsky discussed certain provisions of the American tax code and described them with the help of logical formulae.
I then asked myself about the usefulness of such an approach: how would formalizing the tax code take us beyond what we already know? As Lawsky points out, it would clarify certain legislative provisions with ambiguous intent. But the added value of this clarification, admittedly important from a legal point of view, left me unsatisfied. I wondered whether formalization might also help us better understand how our yearly tax contributions are calculated.
Some work already exists on how one’s tax contribution is calculated. There is already a simulator that computes income tax from a tax return. INSEE also produces numerous studies on the effect of income tax and allowances on households’ actual income. The OpenFisca initiative, led by Etalab, even offers a full implementation of the French socio-fiscal system for the economists who conduct such studies.
However, all of this work has to start from actual data on fiscal households in order to calculate the quantities of interest (taxes, allowances, etc.). The starting data determine the scope of the study, and tax secrecy makes access to households’ fiscal data very difficult. In such a situation, how does one study the effects of a fiscal reform in advance? The LexImpact site, built on OpenFisca, allows one to quickly visualize the effects of a change in the parameters of current legislation on test cases. MPs and parliamentary staff also have the option of simulating their reform on a small sample (a few thousand) of real household tax records.
But is a sample of a few thousand or tens of thousands of tax households sufficient to observe the effects of a tax reform in all cases? Let’s do a back-of-the-envelope estimate: if we count 20 income brackets to consider for households (from €1600 to €1700, from €1700 to €1800, etc.), the 3 cases of marital status (single, couple, cohabiting), the number of dependants between 0 and 5, and the possibility for all or some of these people to be disabled, we arrive at several thousand different cases to study! Moreover, in a small sample, most of the data will be concentrated in over-represented “average” categories, further reducing the chances of the dataset covering all possible cases. And all this does not even take into account studying the effects of taxation on the same tax household as it changes from year to year, with variations in income, changes in family circumstances, and so on.
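The back-of-the-envelope count above can be sketched in a few lines. The modelling choices here (20 brackets, an independent disabled/not-disabled flag for every household member) are my own simplifications, not an exact restatement of any official classification:

```python
# Rough count of distinct household cases to study (simplifying assumptions:
# 20 income brackets, three marital statuses, 0 to 5 dependants, and an
# independent disabled/not-disabled flag for each household member).
cases = 0
for adults in (1, 2, 2):              # single, couple, cohabiting
    for dependants in range(6):       # 0 to 5 dependants
        members = adults + dependants
        cases += 20 * 2 ** members    # brackets x disability combinations
print(cases)  # 12600 — indeed several thousand cases
```

Even this crude count lands in the tens of thousands once you add more income brackets or more household characteristics, which is the point: a random sample of a few thousand households cannot cover them all.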
Based on this observation, I realized that the problem of studying tax legislation and its effects on the population in all its diversity is hugely important. However, an analytical tool commensurate with the impact taxation can have on the lives of our fellow citizens simply doesn’t exist. A bad interaction between a tax bracket and the threshold of an allowance can have dramatic consequences on a family’s life, with potential losses of up to several hundred euros per month. Do these hypothetical unfortunate situations really exist? If so, how can they be systematically identified so that they can be corrected?
The complexity of tax legislation, the product of historical construction but also of the need to respond to a wide variety of real-life situations, risks escaping its creators like a legal Frankenstein’s monster. Visibility into the effects of taxation and benefits is also a democratic issue, as evidenced by the recent debate over the test cases of the Delevoye report on pension reform. Pending a complete redesign like the one envisioned by Landais, Piketty and Saez in 2011, I decided to see what my field of research, formal methods, could contribute to this problem.
An automatically proven socio-fiscal model
The study of taxation and benefits is an intrinsically mathematical exercise, when laid out the right way. Indeed, we assume in all that follows that we possess the relevant fiscal information for each household, such as the contents of an income tax return or of the forms from online simulators: all the data needed to assess a household’s entitlement to various aids and allowances. The amount of money owed (as tax) or received (as an allowance) is then a mere mathematical function of the characteristics of the household. In order to study this function, I decided to encode it in an SMT solver called Z3.
An SMT solver is a particular instance of what is called an automatic theorem prover. These automatic provers work as follows:
- we encode a problem as constraints on boolean or integer variables (x and y, for example);
- the prover is then asked whether there exist values for the variables such that all the constraints are met;
- if the answer is yes, the prover returns such an assignment of the variables; if not, the prover guarantees that the constraints cannot all be met.
In our case, the variables are the characteristics of the household: number of children, income, etc. Constraints are used to model two things. First, the rules for tax and allowance calculations. For example, “if the household is a couple with two non-disabled children, then the family quotient is equal to 3”. Second, constraints are also used to ask the questions we want to put to our little socio-fiscal model. For instance, if we look for households paying taxes greater than or equal to 30% of their income, we can simply add this constraint to the model. The SMT solver will then tell us whether such a household exists, and if so, will give us an example of it.
Let’s get straight to the point with a problem that has consequences for many households: marginal taxation and threshold effects. Let us imagine that an individual in the household sees their salary increase from one year to the next. They will then pay more income tax, but will also see certain allowances diminish. The question is whether there are situations in which the household’s actual income decreases even though the wage increases. Specifically, we are looking for an effective marginal rate greater than 100%.
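Before looking at the solver’s answer, here is a toy illustration of the mechanism, with made-up numbers (not the real scholarship rules): a hard means-test cutoff is enough to push the effective marginal rate far above 100%.

```python
# Hypothetical means-tested scholarship: 900 EUR per child if the household's
# annual income is below a 19 500 EUR cutoff, nothing at all above it.
def scholarship(annual_income, children=2):
    return 900 * children if annual_income < 19500 else 0

def net_income(salary):
    return salary + scholarship(salary)

before = net_income(19400)   # 19400 + 1800 = 21200
after = net_income(19600)    # 19600 + 0    = 19600

# The salary rose by 200 EUR but net income FELL by 1600 EUR:
effective_marginal_rate = 1 - (after - before) / 200
print(after - before, effective_marginal_rate)  # -1600 9.0 (a 900% rate)
```

Finding such a configuration by hand is easy in a toy model with one rule; the point of the SMT solver is to find it automatically when dozens of interacting rules are involved.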
The SMT solver was able to answer this question: yes! But more interestingly, this positive response comes with an example that satisfies all the constraints. This example is not drawn from actual French household data, which the solver does not have access to. It is a hypothetical situation deduced from the mathematical functions used to calculate taxes and welfare allowances. Although hypothetical, such examples are no less realistic, and they highlight legislative blind spots: the unfortunate result of the interaction between several separately designed pieces of legislation.
Here, we have a cohabiting couple in which the first partner earns a net salary of €1768.55 per month and the second does not work. The couple has two 16-year-old children, enrolled in high school, who are dependants of the second partner. The family rents a dwelling in zone III (outside urban areas of more than 100,000 inhabitants) for €784.09 per month. Let’s assume that the first partner’s income increases by €40 per month (2.2% of salary) the following year. For this household, the amounts of income tax and the various allowances before and after the increase are as follows (not taking into account the fact that some allowances are calculated on income earned in the current year, whereas the tax contribution is calculated on the previous year’s income):
| | Before | After | Change |
|---|---|---|---|
| Annual salary | €21 223,00 | €21 703,00 | + €480,00 |
| Revenu fiscal de référence | €19 101,00 | €19 533,00 | + €432,00 |
| High-school scholarship | €876,00 | €0,00 | - €876,00 |
| Personalized housing allowance | €110,00 | €99,00 | - €11,00 |
| Working bonus | €387,00 | €382,00 | - €5,00 |
| Net annual income | €29 607,00 | €28 884,00 | - €723,00 |
We can see that instead of earning €40 more per month, the household ends up losing about €60 per month compared with the previous situation. 75% of this loss is explained by the loss of the high-school scholarship for both children: a very abrupt threshold effect, multiplied by the number of dependants benefiting from it.
So what does this example tell us? First, that a linear phase-out similar to the one used for family allowances should be generalized to all allowances in order to avoid this kind of situation. Second, that a very specific combination of factors triggers it: two children in high school at the same time, and a net salary of around €1750. The fact that the couple is cohabiting, which adds a tax increase accounting for 11% of the loss, is not the central factor, but it still concerns 20% of the French population.
This example could also have been discovered by a classic study on actual data, but the sample size required to randomly encounter such a situation is quite large. One of the strengths of this new method of analysis is therefore to discover, effectively, reliably, and without any confidential data, the undesirable borderline cases induced by the socio-fiscal system.
The SMT solver could also be used to find the exact linear slope to give the scholarship’s phase-out in order to avoid too high a marginal rate. But what counts as “too high”? 70%? More? Less? Moreover, the threshold effect here is multiplied by the number of children: to maintain a low marginal rate around the threshold, the phase-out slope must be gentler the larger the families one wishes to cover. We see that there is room for a lot of political arbitration: the answers to the question “what is acceptable?” are highly subjective, and the question itself is rife with tensions. However, once the political decision has been made, the SMT solver can check whether the formulas and rules used for the calculation agree with the principles decided by the legislator.
I have only presented one way of getting results from the model encoded in the SMT solver. In fact, it is possible to ask any question of the form “Is there a situation such that…?”. Here are some examples of possible applications, of great interest from a public-policy evaluation point of view:
- “Is it possible for a household to gain/lose more than a given amount as a result of this year’s tax reform?”
- “Is it possible to increase the rate of the second tax bracket by more than 5 points without any household losing more than €200 a year?”
- Assuming you’ve encoded a histogram of the income distribution in the SMT solver: “Is it possible to adjust the tax brackets so as to collect €5 billion more per year in tax receipts without any household seeing its tax contribution go up by more than 3%?”
Is this science or science fiction? The automatic theorem prover is not magic: the more complex the problem, the more time and resources it takes to solve it. My current prototype can already consume more than 100 GB of RAM and run for several days; scaling up to more complicated questions therefore represents a real technical challenge. But formal methods have more than one trick up their sleeve, and they are going to help us again, this time to present the SMT solver with a problem reduced to its core and freed from superfluous complexity.
Towards a formal implementation for calculating taxes
It is already possible to do a lot with a small prototype like the one I developed. Ideally, though, we would like the function that calculates the tax in all cases, for all types of income and status, with all the tax loopholes. As it happens, there is already a computer program that does this: the one used by the Directorate General of Public Finance (DGFiP) to calculate your tax contribution!
Since the passage of Law 2016-1321 of 7 October 2016, “for a digital Republic”, the source code used by a branch of the French administration is considered an administrative document, subject to the right of access and public reuse. The code implementing the tax calculation was therefore published by the DGFiP in 2016, during a hackathon which saw some interesting ideas emerge. You can inspect the code in the official DGFiP repository. So can we just use it, then? Not really.
The code is written in a special programming language called “M”, for “macro-language”. It is a language developed by and for the DGFiP to write the implementation of the tax code. To date, however, the DGFiP has not made public the tools it uses to execute this M code. Is the code unusable, then? In 2016, Etalab published a parser for the M language, capable of translating M source code into an abstract syntax tree that can be interpreted in Python. Using this parser, I was able to reproduce the studies done during the 2016 hackathon, but I also noticed some problems. Firstly, the tax amount computed with the parser was sometimes wrong, especially when the final amount had to take into account the cap on a tax benefit or the family quotient. Secondly, and this is far more problematic, the parser was not written with the aim of giving a precise semantics to the M language: the M operators in the abstract syntax tree are simply replaced at runtime by the corresponding Python operators.
Why isn’t that enough? Remember our goal: beyond simply executing the M code, we also want to translate it into input for an automatic prover, so as to replicate the results of our little prototype in the general case. To do this, we need to know exactly how M programs behave in all possible cases. This knowledge has a precise name in the study of programming languages: it amounts to endowing the M language with a formal semantics. So that is what I set out to do, using a mixture of reverse engineering and additional information given to me by the DGFiP, with whom I was able to exchange regularly over several months. This work bore two fruits:
- a formal semantics of the M language, presented in a research article;
- a compiler for the M language, Mlang, based on this semantics.
I would like to thank:
- Raphaël Monat, a fellow doctoral student at LIP6 and co-author of Mlang;
- Christophe Gaie and all his collaborators from the SI-1E team at the DGFiP, who willingly responded to my requests for information on the M language.
Why is it important for the compiler to be based on a formal semantics? Because this semantics allows us to validate the translations of the M language into other programming languages. For example, Mlang can statically generate a Python function that reproduces the tax calculation as it is done in the official DGFiP simulator. What’s the difference with the tool previously published by Etalab?
I realized that the M language has a special value, `undefined`. In Etalab’s parser, `undefined` is translated to 0. However, the work of writing a formal semantics revealed that `undefined` does not behave like 0 in all cases: for example, a comparison with `undefined` always yields `undefined` (and not the result of the same comparison with 0). These small details make the compiler more robust and increase our confidence in the files generated from M code using Mlang. Specifically, they reduce the risk that the generated program will behave differently from the M code.
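The difference can be sketched in a few lines of Python. This is a deliberately simplified model of the behaviour just described, not Mlang’s actual implementation:

```python
# Simplified model of M's special value: comparisons involving "undefined"
# yield "undefined", which a naive undefined-is-0 translation gets wrong.
UNDEFINED = object()  # sentinel standing in for M's undefined value

def m_less_than(a, b):
    # Semantics as described in the text: undefined propagates
    if a is UNDEFINED or b is UNDEFINED:
        return UNDEFINED
    return a < b

def naive_less_than(a, b):
    # Naive translation in the style of the 2016 parser: undefined becomes 0
    a = 0 if a is UNDEFINED else a
    b = 0 if b is UNDEFINED else b
    return a < b

# The two semantics disagree as soon as undefined meets another value:
print(m_less_than(UNDEFINED, -1) is UNDEFINED)  # True: undefined propagates
print(naive_less_than(UNDEFINED, -1))           # False, because 0 < -1 is False
```

Multiply this kind of discrepancy across thousands of variables and conditionals, and it becomes clear why a precise semantics is needed before trusting any translation of the M code.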
At the time this post is published, however, the code generated by Mlang is still incorrect. Indeed, a side effect of this formalization and validation work was the discovery that the M code published by the DGFiP does not contain all the information needed to replicate the tax calculation. Technically, the M code is called several times with particular values for certain variables. This subtlety had not been detected at the time of the hackathon and the first publication of the code. I was able to inform the DGFiP, which is currently putting in place the means for me to retrieve from their system the information concerning these multiple calls of the M code with different parameters. Once this information has been retrieved, I will be able to complete the formalization work and validate Mlang against the DGFiP’s official test sets.
Conclusion: towards a general implementation of the tax code?
Another interesting application would be the translation of M code into macroeconomic simulation languages such as TROLL or SAS. Indeed, these languages are used by INSEE to produce models of the French economy that require mathematical expressions of the income tax calculation. These models could then be updated each year by compiling the new version of the M code to TROLL or SAS, instead of having to replicate legislative changes in the TROLL or SAS code by hand.
Finally, Mlang should also improve our prototype “automatic tax code prover”. Before translating the tax function to an SMT solver, it is necessary to reduce its mathematical complexity to the essential parts, in order to limit the solver’s resource consumption. The framework of the Mlang compiler and the formal semantics of M allow us to perform reliable optimizations that simplify the program without changing its meaning, sometimes under additional assumptions. Beyond classical optimizations such as dead-code removal, we can also use an abstract interpretation tool such as MOPSA to perform a finer analysis of the program. Raphaël Monat and I hope that, combined with strategies for splitting queries into subproblems, this optimization of the M code will allow us to scale up our prototype of an automatic tax code prover.
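To give a flavour of what such meaning-preserving simplifications look like, here is a toy constant folder on a miniature expression AST. This is my own illustration, not Mlang’s actual intermediate representation:

```python
# Toy constant folding and subtree pruning on a miniature expression language.
from dataclasses import dataclass

@dataclass(frozen=True)
class Const:
    value: float

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Add:
    left: object
    right: object

@dataclass(frozen=True)
class Mul:
    left: object
    right: object

def simplify(e):
    """Recursively fold constants and prune trivially dead subexpressions."""
    if isinstance(e, (Const, Var)):
        return e
    l, r = simplify(e.left), simplify(e.right)
    if isinstance(l, Const) and isinstance(r, Const):  # fold constants
        return Const(l.value + r.value if isinstance(e, Add) else l.value * r.value)
    if isinstance(e, Mul) and Const(0.0) in (l, r):    # x * 0 -> 0: prunes the subtree
        return Const(0.0)
    if isinstance(e, Add) and Const(0.0) in (l, r):    # x + 0 -> x
        return l if r == Const(0.0) else r
    return type(e)(l, r)

# A variable multiplied by a zero coefficient disappears entirely:
expr = Add(Var("income"), Mul(Const(0.0), Var("unused_provision")))
print(simplify(expr))  # Var(name='income')
```

Note that in M itself such rules need care: with the `undefined` value in play, identities like x * 0 = 0 may not hold for every expression, which is one more reason to ground each optimization in the formal semantics.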
Beyond technical understanding, I think there are several lessons to be learned from this story.

Firstly, free access to research and code allows synergies and unusual initiatives to develop. The DGFiP is the first administration in the world to publish its tax calculation implementation in this way, and I think this project will quickly bring benefits back to the administration. All my code is also published under an open-source license, because it is important that a tool allowing fine-grained, automated analysis of French taxation be accessible to everyone, and not reserved for a private actor who would use it for their own profit.

Secondly, I think this project shows once again that innovation often comes from fundamental research carried out over the long term. The tools used here (such as Z3) have solid theoretical foundations established over decades, and I am of course standing on the shoulders of giants. I hope that this example of the application of formal methods can draw some attention to this not very marketable field in these times of the race for “artificial intelligence”.

Thirdly and finally, my intention with this project is also to show that innovation can directly benefit the public sector. I therefore welcome the existence of the “entrepreneurs of general interest” programme, of which the LexImpact initiative is a part. In my opinion, such a programme could only benefit from being closer to the applied research carried out in the general interest in French laboratories. At a time when the prospect of a career in public research seems bleak to many highly qualified young people, the idea of being able to use one’s knowledge to improve government action seems to me a powerful motivation to avoid the brain drain towards a private sector whose activities, especially in the field of technology, sometimes lack meaning.