<h1>Fragments</h1>
<h2>Mel Stride (2024-03-27, Tim Bradshaw)</h2>
<p>I am unfortunate enough to have Mel Stride as my MP. On the 21st of March, 2024, <a href="https://www.msn.com/en-gb/health/other/mental-health-culture-has-gone-too-far-says-mel-stride/ar-BB1kfnHw">he said some really unpleasant things about mental health</a>. I was going to write to him, but there’s just no point: someone who can say what he said is not someone with whom it is useful to communicate. Below is the draft of what I wrote.</p>
<!-- more-->
<p>I was really impressed by what you said on the 21st March:</p>
<blockquote>
<p>There is a real risk now that we are labelling the normal ups and downs of human life as medical conditions which then actually serve to hold people back and, ultimately, drive up the benefit bill.</p></blockquote>
<p>That is … just an extremely stupid, arrogant and nasty thing to say.</p>
<p>You then went on to make some more noise about how doctors tend, after seeing people with mental health problems, to sign them off as sick. That is, you know, <em>doctors</em>: people who have done six or more years of very hard work to qualify and who are bound by medical ethics. As opposed to you: a person with a degree in the easy bits of three mostly-bullshit subjects.</p>
<p>And of course you would rather that the helots just struggle on until they fall off some cliff and die. That, after all, would make some number on a spreadsheet bigger and put money into the pockets of your corporate sponsors: the people you actually work for. Not to mention conveniently eliminating the unproductive: <em>Arbeit macht frei</em> as someone once wrote.</p>
<p>Well I’ve walked along the edge of that cliff most of my life, quite literally on several occasions. I have never been diagnosed with anything, nor in fact sought medical help: I hate to think what people who have must have gone through. And however little I have contributed to society it is more than you ever will. I was never going to vote for your disgusting party of course, but until now I had some respect for you personally: not any more.</p>
<p>And of course it hasn’t occurred to you that the people of Britain have been living through the worst government for a century: a government which has destroyed the economy, twice; a government which has systematically downplayed climate change thus erasing young people’s hope for a better future; a government which is destroying communities, destroying the arts, destroying all the things its members are too stupid to understand; a government which used the pandemic as a way of handing billions to its friends. A government which has been actively trying to suppress votes to stay in power. The only comforting thing about the government you’ve been part of is that you aren’t forced to choose between malice and incompetence: it’s always both<sup><a href="#2024-03-27-mel-stride-footnote-1-definition" name="2024-03-27-mel-stride-footnote-1-return">1</a></sup>.</p>
<p>I suppose it hasn’t occurred to you that the response to living through this might be to be just a little bit depressed? No, of course it hasn’t: I don’t suppose you spend much time thinking about other people, do you?</p>
<p>And now, at long last, your party is finally facing the electoral obliteration it deserves. Nothing you think, say or do will ever matter again. That, at least, is a reason to be cheerful.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2024-03-27-mel-stride-footnote-1-definition" class="footnote-definition">
<p>with apologies to Garry Kasparov. <a href="#2024-03-27-mel-stride-footnote-1-return">↩</a></p></li></ol></div>
<h2>The deal (2024-02-05, Tim Bradshaw)</h2>
<p>Here’s the deal: if I like the thing you make I will pay you for it, because people deserve to get paid for their work. If you then turn around and infest the thing I have paid you for with advertising which I cannot avoid, then fuck you.</p>
<!-- more-->
<p>In other words: today I cancelled my Amazon Prime subscription, as I should have done long ago. That is all.</p>
<h2>There is no cabal (2024-01-29, Tim Bradshaw)</h2>
<p>Everyone wants to believe in conspiracies. Some people believe that the alarmingly far-right government of the UK is conspiring with shadowy plutocrats to enrich themselves. That government itself <a href="https://www.bbc.co.uk/news/uk-politics-66965714" title="15-minute cities">apparently</a> believes in the ludicrous ‘15-minute city’ conspiracy theory, and that something variously known as ‘the blob’ and ‘lefty lawyers’ is working furiously against them. Trump supporters in the US believe in more conspiracy theories than it’s easy to count. Their opponents believe that Trump is a sock puppet for Putin, or in various conspiracies called ‘disaster capitalism’. People on all sides think the Jews or, perhaps, the Muslims, are behind everything. Or is it the climate scientists?</p>
<!-- more-->
<p>Here’s the thing: it’s all nonsense. The illuminati do not exist. There is no cabal. If you think there is, you need to get out more.</p>
<p>It’s pretty obvious that large-scale conspiracies are all but impossible: humans are just not very good either at keeping secrets or at running large organisations effectively. And, of course, <a href="https://doi.org/10.1371/journal.pone.0147905" title="On the viability of conspiratorial beliefs">you can make this formal</a>, if you want to.</p>
<p>In fact this is very easy to see. If you say that a person has some chance, \(c\), of leaking information about some conspiracy they’re involved in each year, then if you have \(n\) people conspiring for \(m\) years, and if all of them independently may leak information, then the chance that the conspiracy leaks after this time is</p>
<p>\[1 - (1 - c)^{nm}\]</p>
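<p>To make the formula concrete, here it is as a couple of lines of Lisp (a minimal sketch: the function name is mine, and the printed value is approximate):</p>
<pre class="brush: lisp"><code>(defun leak-probability (c n m)
  ;; chance that a conspiracy of n people, each of whom independently
  ;; has a chance c of leaking per year, has leaked after m years
  (- 1 (expt (- 1 c) (* n m))))

> (leak-probability 0.01 100 10)
0.9999568</code></pre>
<p>So a hundred conspirators, each with only a 1% chance of leaking per year, are all but certain to be found out within a decade.</p>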
<p>What this means is easy to see. For a conspiracy where each person has a 1% chance of leaking it each year you get a picture like this:</p>
<div class="figure"><img src="/fragments/img/2024/there-is-no-cabal/chance-one-percent.svg" alt="Chance of a conspiracy leaking, 1%/person/year" />
<p class="caption">Chance of a conspiracy leaking, 1%/person/year</p></div>
<p>For a conspiracy where the chance is 0.1%/person/year you get this:</p>
<div class="figure"><img src="/fragments/img/2024/there-is-no-cabal/chance-tenth-percent.svg" alt="Chance of a conspiracy leaking, 0.1%/person/year" />
<p class="caption">Chance of a conspiracy leaking, 0.1%/person/year</p></div>
<p>And I don’t know if this is clearer or not, but here is the 1% graph in 3d:</p>
<div class="figure"><img src="/fragments/img/2024/there-is-no-cabal/chance-one-percent-3d.svg" alt="Chance of a conspiracy leaking, 0.1%/person/year" />
<p class="caption">Chance of a conspiracy leaking, 0.1%/person/year</p></div>
<p>Well, you can see that the situation looks pretty hopeless: large conspiracies which last a long time are just doomed to leak. Of course real conspiracies aren’t made up of people, all of whom know everything, and who decide, randomly and independently, whether they should leak information each year: they’re more complicated than that. But that doesn’t make them more plausible.</p>
<p>Large, long-lived conspiracies are <em>extremely</em> implausible.</p>
<p>If you don’t want to think about the maths just look at the world. Look at the catastrophic, chaotic mess that is the current UK government. Look at the appalling series of disasters that was the Trump administration<sup><a href="#2024-01-29-there-is-no-cabal-footnote-1-definition" name="2024-01-29-there-is-no-cabal-footnote-1-return">1</a></sup>. These are not people capable of conspiring with themselves, let alone anyone else. Trump is not a smart person, and neither is Rishi Sunak.</p>
<p>Look at Putin and Ukraine.</p>
<p>Or look at the supposed great genius of private enterprise: Elon Musk. I mean, it’s impossible not to laugh at the mess he’s made of Twitter.</p>
<p>It’s not that these people can’t do harm: they can do, and are doing, enormous harm. If Trump wins another term, we’re all fucked. But the harm they do is not being done by some clever hidden scheming: they’re doing it in plain sight. They’re doing it both because they are exactly the evil shits they seem to be, and because they are grotesquely incompetent.</p>
<p>It’s not that they’re not corrupt: <a href="https://www.transparency.org/en/cpi/2023">they are very corrupt</a>. But that corruption is <em>obvious</em>: the reason they don’t get caught is because they run the government. And that’s not a conspiracy: we know they run the government because many of them <em>are</em> the government. When all these government ministers ‘lose’ WhatsApps from 2020–2021 we know what that means: they’re not hiding it, they’re lying, we know they’re lying, and they know we know they’re lying. It’s not a conspiracy, it’s in plain sight.</p>
<p>The problem is not, in fact, conspiracies, it’s the <em>belief</em> in conspiracies. If you believe that some hidden group of people are behind everything (as Trump supporters do, and as many, many other people do), then you <em>believe a thing which is false</em>, and every conclusion you draw from such a belief is therefore junk. The more you believe in conspiracies, the worse your reasoning will be.</p>
<p>The truth is out there: there is no cabal.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2024-01-29-there-is-no-cabal-footnote-1-definition" class="footnote-definition">
<p>Let’s pray it doesn’t turn out to be the <em>first</em> Trump administration. <a href="#2024-01-29-there-is-no-cabal-footnote-1-return">↩</a></p></li></ol></div>
<h2>Sexism in higher education (2023-11-21, Tim Bradshaw)</h2>
<p>Some time ago I wrote <a href="/fragments/2020/05/09/sexism-in-computer-science/">a post with empirical evidence for sexism in computer science</a>. I’ve since realised that the data I used then is part of a much larger data set maintained by the US <a href="https://nces.ed.gov/">National Center for Education Statistics</a>: here are some more pictures of their data.</p>
<!-- more-->
<h2 id="what-i-want-to-show">What I want to show</h2>
<p>The ratio between male & female<sup><a href="#2023-11-21-sexism-in-higher-education-footnote-1-definition" name="2023-11-21-sexism-in-higher-education-footnote-1-return">1</a></sup> students gaining degrees in various subjects has often changed dramatically over a fairly short period of time (generally about 50 years here: about two generations). Such a dramatic, rapid change shows that these ratios are not due to innate ability but to women being discouraged from studying some subjects.</p>
<p>By plotting pictures of this data these changes become much easier to see, compared with looking at large tables of numbers.</p>
<h2 id="the-data-and-how-it-was-plotted">The data and how it was plotted</h2>
<p>The data I’m using comes from <a href="https://nces.ed.gov/">NCES</a> and in particular it comes from <a href="https://nces.ed.gov/programs/digest/current_tables.asp">these tables</a>. These are, I am sure, updated fairly frequently: the data I am plotting here was fetched in November 2023<sup><a href="#2023-11-21-sexism-in-higher-education-footnote-2-definition" name="2023-11-21-sexism-in-higher-education-footnote-2-return">2</a></sup>.</p>
<p>The plots are simple-minded: they just look at the male/female ratio without taking any account of the total number of students. There is no smoothing. For fields which started very small there is therefore sometimes quite a lot of variability, especially for higher degrees.</p>
<p>The ratios were computed from the numbers of students in the tables, and in particular I didn’t use their precanned figures. Where I checked, mine are the same.</p>
<p>The data does not contain any gender information <em>other</em> than male & female: in particular it takes no account of trans people or anything like that. It’s unlikely that that data was even gathered until quite recently, of course, but in any case this is just missing from the pictures because it’s not in the data.</p>
<p>The tables contain data for three levels of degree: bachelor’s (BSc or BA I presume), master’s (MSc, MA, MPhil I presume) and doctor’s (PhD). Those have been plotted separately.</p>
<p>I’ve used the starting year of any given academic year: 1984–85 turns into 1984. I’ve not generally plotted the data before 1970 so all the graphs have the same start date: where there is data before 1970 in the tables it is generally decadal. The end date is the most recent year in the data, which varies slightly between tables.</p>
<p>The plots have a y-axis which runs from 0 to 50% or to the nearest multiple of 5% above the maximum female percentage.</p>
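<p>Concretely, the top of the y-axis follows a rule like this (a sketch of the rule just described, not the actual plotting program):</p>
<pre class="brush: lisp"><code>(defun y-axis-top (max-female-percent)
  ;; 50, or the nearest multiple of 5 above the maximum, whichever is larger
  (max 50 (* 5 (ceiling max-female-percent 5))))</code></pre>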
<p>I have looked, rather casually, for sources of similar data for the UK. I haven’t found anything. I have asked the UK <a href="https://www.ons.gov.uk/">Office for National Statistics</a> though, and if they have anything useful I will write another post.</p>
<h2 id="the-pretty-pictures">The pretty pictures</h2>
<p>This is necessarily just a fairly arbitrary selection of plots of subject areas I thought would be interesting: there is a lot more data there that I have not plotted.</p>
<h3 id="computer-and-information-sciences">Computer and information sciences</h3>
<p>This is table 325.35, and is an updated version of what I plotted <a href="/fragments/2020/05/09/sexism-in-computer-science/">previously</a>. This plot runs from 1970 to 2020.</p>
<div class="figure"><img src="/fragments/img/2023/sexism-in-higher-education/325.35.svg" alt="CS & IS graduate sex ratio, US, 1970-2020" />
<p class="caption">CS & IS graduate sex ratio, US, 1970–2020</p></div>
<p>You can see from this that the ratio has gone up since about 2010, but it is still far lower than it was in about 1984. The data for higher degrees shows far less of a bump than the data for first degrees: presumably whatever drove women out of CS courses had less effect for higher degrees. The data for doctor’s degrees is quite bumpy because the numbers are rather low: this is common in all the graphs.</p>
<h3 id="mathematics-and-statistics">Mathematics and statistics</h3>
<p>This is table 325.65. This plot runs from 1970 to 2020.</p>
<div class="figure"><img src="/fragments/img/2023/sexism-in-higher-education/325.65.svg" alt="Mathematics & statistics graduate sex ratio, US, 1970-2020" />
<p class="caption">Mathematics & statistics graduate sex ratio, US, 1970–2020</p></div>
<p>This shows pretty much no sign of a 1980s peak followed by a decline. What it does show is a slight decline after about 2000 which perhaps is visible in all three lines.</p>
<h3 id="engineering-and-and-engineering-technologies">Engineering and and engineering technologies</h3>
<p>This is table 325.45. Plot from 1970 to 2020 again.</p>
<div class="figure"><img src="/fragments/img/2023/sexism-in-higher-education/325.45.svg" alt="Engineering & engineering technologies graduate sex ratio, US, 1970-2020" />
<p class="caption">Engineering & engineering technologies graduate sex ratio, US, 1970–2020</p></div>
<p>Well, there were essentially no women studying engineering in 1970 (who knew?), but there are a lot more now. A woman studying engineering is now more likely than a man to pursue a higher degree in the subject. There might be a 1980s effect, but there definitely is something after about 2000, although it seems to have gone away now. This is a pretty dramatic picture, I think.</p>
<h3 id="physical-sciences-and-science-technologies">Physical sciences and science technologies</h3>
<p>This is table 325.70. The plot runs from 1970 to 2017, which is the most recent data.</p>
<div class="figure"><img src="/fragments/img/2023/sexism-in-higher-education/325.70.svg" alt="Physical sciences & science technologies graduate sex ratio, US, 1970-2017" />
<p class="caption">Physical sciences & science technologies graduate sex ratio, US, 1970–2017</p></div>
<p>There’s no 1980s effect. There is quite a strong post–2000 effect.</p>
<h3 id="english-language-and-literatureletters">English language and literature/letters</h3>
<p>This is table 325.50. The plot runs from 1970 to 2017, which is the most recent data.</p>
<div class="figure"><img src="/fragments/img/2023/sexism-in-higher-education/325.50.svg" alt="English language & literature/letters graduate sex ratio, US, 1970-2017" />
<p class="caption">English language & literature/letters graduate sex ratio, US, 1970–2017</p></div>
<p>More women than men do first degrees in this area, and this ratio has been pretty stable for a long time. Once upon a time not many women went on to do doctoral degrees, but the ratio has mostly caught up now.</p>
<h3 id="health-professions-and-related-programs">Health professions and related programs</h3>
<p>This is table 325.60. Plot from 1970 to 2020.</p>
<div class="figure"><img src="/fragments/img/2023/sexism-in-higher-education/325.60.svg" alt="Health professions & related programs graduate sex ratio, US, 1970-2020" />
<p class="caption">Health professions & related programs graduate sex ratio, US, 1970–2020</p></div>
<p>This has always been dominated by women. What is probably hidden in this data is that most of this dominance was nursing and related degrees, while the number of female (medical) <em>doctors</em> was rather low. However it has climbed steadily and dramatically since 1970, and in 2020 about 60% of new doctors were female.</p>
<h3 id="social-sciences-and-history">Social sciences and history</h3>
<p>This is table 325.90. The plot runs from 1970 to 2017, which is the most recent data.</p>
<div class="figure"><img src="/fragments/img/2023/sexism-in-higher-education/325.90.svg" alt="Social sciences & history graduate sex ratio, US, 1970-2017" />
<p class="caption">Social sciences & history graduate sex ratio, US, 1970–2017</p></div>
<p>Unfortunately there is no table for economics degrees: this is the best proxy I could find. It’s not terribly interesting although it does show some signs of both the 1980s and post–2000 dips.</p>
<h3 id="visual-and-performing-arts">Visual and performing arts</h3>
<p>This is table 325.95. Plot from 1970 to 2020.</p>
<div class="figure"><img src="/fragments/img/2023/sexism-in-higher-education/325.95.svg" alt="Visual & performing arts graduate sex ratio, US, 1970-2020" />
<p class="caption">Visual & performing arts graduate sex ratio, US, 1970–2020</p></div>
<p>Again, women have always slightly dominated first degrees, and have been increasingly likely to do higher degrees. There is perhaps some fall in the ratio for doctoral degrees after 2000. Is there a 1980s dip here?</p>
<h2 id="what-can-you-conclude">What can you conclude</h2>
<p>First of all and most obviously: <strong>there is no evidence for differences in innate ability between men and women here</strong>. If you look particularly at the graphs for CS & IS, engineering and physical sciences, you will see enormous changes in the percentage of women graduating in these areas within two generations. In the case of Engineering the change between 1970 and 2020 was by a <em>factor of 28</em>, from 0.8% to 23% female. The change between 1949 (the first data in the table) and 2020 was by a factor of nearly 77. It is <em>absolutely impossible</em> that such changes should be due to changes in innate ability, and it is equally impossible to even glimpse any possible differences in innate ability in the presence of this vast socially-driven change.</p>
<p>Secondly, as before, something happened in CS & IS which drove out nearly half the women who studied it in a little more than a generation. This was probably the period when achieving a degree in this area was most lucrative.</p>
<p>And this time I’ll come out and say it: I think that what did this was just obviously white male tech bros who started arriving on CS courses after the home computer revolution of the early 1980s. These are people who clearly behave just as badly towards women as you would think they would. Who would have guessed it?</p>
<p>Thirdly there may be some evidence of a decline of women in some science & engineering subjects after 2000. I don’t know what is causing that, if it’s even real.</p>
<p>Finally, things are now more equal — better in fact — than they were in 1970, although CS & IS has not recovered from its infestation of tech bros.</p>
<hr />
<p>I haven’t yet looked for tables like the ones I’ve used here categorised by ethnic group. I imagine they will tell exactly the same story: that there is no indication at all of any innate difference in ability between ethnic groups.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2023-11-21-sexism-in-higher-education-footnote-1-definition" class="footnote-definition">
<p>The data contains only information about male and female: see the next section. <a href="#2023-11-21-sexism-in-higher-education-footnote-1-return">↩</a></p></li>
<li id="2023-11-21-sexism-in-higher-education-footnote-2-definition" class="footnote-definition">
<p>I have copies of the versions I used which I can provide if anyone cares, as well as the program which plotted this. <a href="#2023-11-21-sexism-in-higher-education-footnote-2-return">↩</a></p></li></ol></div>
<h2>The UK’s COVID-19 enquiry (2023-11-15, Tim Bradshaw)</h2>
<p>Does the UK government’s incompetent response to COVID–19, as exposed by the enquiry, offer any lessons for the future?</p>
<!-- more-->
<p>I think it does: it tells us that the UK’s education system is hugely deficient. In particular we have supposedly ‘elite’ universities offering degrees which produce graduates who believe that they are supremely entitled to govern while being, in fact, utterly unqualified to do so. People who are both completely incapable of understanding the mathematics they need to deal with something like a pandemic, climate change or many other things and so arrogant that they refuse to listen to those who do understand. Oxford’s useless PPE degree is not the only problem, but abolishing it would certainly be a good start.</p>
<h2>Symbol nicknames: a broken toy (2023-10-12, Tim Bradshaw)</h2>
<p><a href="https://github.com/tfeb/symbol-nicknames">Symbol nicknames</a> allows multiple names to refer to the same symbol in supported implementations of Common Lisp. That may or may not be useful.</p>
<!-- more-->
<p>People often say the Common Lisp package system is deficient. But a lot of the same people write code which is absolutely full of explicit package prefixes in what I can only suppose is an attempt to make programs harder to read. Somehow this is meant to be made better by using package-local nicknames for packages. And let’s not mention the unspeakable idiocy that is thinking that a package name like, say, <code>XML</code> is suitable for any kind of general use at all. So forgive me if I don’t take their concerns too seriously.</p>
<p>The CL package system can’t do all the things something like the Racket module system can do. But it’s not clear that, given its job of collecting symbols into, well, packages, it could do that much more than it currently does. Probably some kind of ‘package universe’ notion such as Symbolics Genera had would be useful. But the namespace has to be anchored <em>somewhere</em>, and if you’re willing to give packages domain-structured names in the obvious way <em>and</em> spend time actually constructing a namespace for the language you want to use, it’s perfectly pleasant in my experience.</p>
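<p>By ‘domain-structured names in the obvious way’ I mean something like this (a made-up example: the package name is mine):</p>
<pre class="brush: lisp"><code>(defpackage :org.tfeb.playground
  ;; named under a domain its author controls, so it can never collide
  ;; with anyone else's XML or UTILS package
  (:use :cl))</code></pre>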
<p>One thing that <em>might</em> be useful is to allow multiple names to refer to the same symbol. So for instance you might want to have <code>eq?</code> be the same symbol as <code>eq</code>:</p>
<pre class="brush: lisp"><code>> (setf (nickname-symbol "EQ?") 'eq)
eq
> (eq 'eq? 'eq)
t
> (eq? 'eq 'eq?)
t</code></pre>
<p>This allows you to construct languages which have different names for things, but where the names are translated to the underlying name efficiently. As another example, let’s say you wanted to call <code>eql</code> <code>equivalent-p</code>:</p>
<pre class="brush: lisp"><code>> (setf (nickname-symbol "EQUIVALENT-P") 'eql)
eql
> (eql 'eql 'equivalent-p)
t</code></pre>
<p>Well, now you can use <code>equivalent-p</code> as a synonym for <code>eql</code> <em>wherever</em> it occurs:</p>
<pre class="brush: lisp"><code>> (defmethod foo ((x (equivalent-p 1)))
"x is 1")
#<standard-method foo nil ((eql 1)) 801005BD23>
> (foo 1)
"x is 1"</code></pre>
<p>Symbol nicknames is not completely portable as it requires hooking string-to-symbol lookup. It is supported in LispWorks and SBCL currently: it will load in other Lisps but will complain that it can’t infect them.</p>
<p>Symbol nicknames is also not completely compatible with CL. In CL you can assume that <code>(find-symbol "FOO")</code> either returns a symbol whose name is <code>"FOO"</code> or <code>nil</code> and <code>nil</code>: with symbol nicknames you can’t. In the case where a nickname link has been followed the second value of <code>find-symbol</code> will be <code>:nickname</code>.</p>
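<p>For example, given the nickname for <code>EQ?</code> established above, you would see something like this (a sketch: the exact printed output depends on the implementation):</p>
<pre class="brush: lisp"><code>> (find-symbol "EQ?")
eq
:nickname</code></pre>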
<p>Symbol nicknames is a toy. I am not convinced that the idea is even useful, and if it is it probably needs to be thought about more than I have.</p>
<p>But it exists.</p>
<h2>Government by conspiracy theory (2023-10-05, Tim Bradshaw)</h2>
<p><a href="https://www.gov.uk/government/publications/plan-for-drivers/the-plan-for-drivers" title="The plan for drivers">Here</a> is the British government’s new ‘plan for drivers’. And here is a quote from it:</p>
<blockquote>
<p>We will explore options to stop local councils using so-called “15-minute cities”, such as in Oxford, to police people’s lives</p></blockquote>
<p>We are now ruled by people <a href="https://www.thebureauinvestigates.com/stories/2023-10-04/what-is-the-15-minute-cities-conspiracy-theory">pushing conspiracy theories</a>: either knowingly because they think that provoking further divisions in society will keep them in power, or because they believe the conspiratorial nonsense they’re peddling to be true. I don’t know which is more terrifying, but in either case these people are grotesquely unfit to be in office.</p>
<hr />
<p><a href="https://web.archive.org/web/20231003195645/https://www.gov.uk/government/publications/plan-for-drivers/the-plan-for-drivers" title="A plan for divers via archive.org, 3rd October 2023">Wayback machine link</a> because rewriting history is pretty much certain here.</p>15-minute citiesurn:https-www-tfeb-org:-fragments-2023-10-01-15-minute-cities2023-10-01T09:58:30Z2023-10-01T09:58:30ZTim Bradshaw
<p>The government of Britain wishes to stop councils — councils elected by local people — implementing schemes where essential amenities are always within a 15-minute walk for their voters.</p>
<!-- more-->
<p>Since access to essential services is clearly unimportant to them, perhaps the government would like to relocate to somewhere other than central London. May I suggest the Moon? I hear <em>Mare Moscoviense</em> is very pleasant at this time of year.</p>
<hr />
<p><a href="https://www.gov.uk/government/news/government-announces-new-long-term-plan-to-back-drivers">Government announcement</a>; <a href="http://web.archive.org/web/20230929214257/www.gov.uk/government/news/government-announces-new-long-term-plan-to-back-drivers">Wayback machine copy</a>; <a href="https://www.bbc.co.uk/news/uk-politics-66965714">BBC news article</a>.</p>The end of hopeurn:https-www-tfeb-org:-fragments-2023-09-29-the-end-of-hope2023-09-29T11:22:44Z2023-09-29T11:22:44ZTim Bradshaw
<p>Being another letter I will not send to my MP.</p>
<!-- more-->
<h2 id="dear-mr-stride">Dear Mr Stride</h2>
<p>I’d like to ask you about some recent policies of the government of which you are a member.</p>
<ul>
<li>The government is vigorously opposed to actions to reduce emissions from vehicles in cities. This will damage the health of almost everyone, and of course put further stress on the health service. But the people who will be <em>most</em> likely to die or have seriously damaged health over their lives are children.</li>
<li>The government is also vigorously opposed to actions to improve the habitability of towns and cities for pedestrians and improving road safety, such as low-traffic areas and lowered speed limits. Again the people most likely to die or be harmed by this will be children.</li>
<li>The government is in the process of reducing its commitment to addressing anthropogenic global warming, because ‘we are doing enough’. That is a lie as I am sure you know: we are not doing enough. Although the rate of current warming is extremely high, it is still fairly slow by human standards. It is hurting us today but it will get far more serious over the next few decades unless we do something serious very soon. You and I will probably not live to see things get really bad. Your children probably will, and their children <em>certainly</em> will. Yet again the people most damaged by this are children.</li></ul>
<p>These are counsels of despair: the government has simply given up hope for the future. The message sent to children and young people is that the government does not care about them, at all, and that it is entirely willing to sacrifice their lives and their futures to keep itself in power for a little longer, because that is all it can think of doing. What hope can they have for their futures when faced with behaviour like this?</p>
<p>Indeed, what hope should any of us have when our government is happy to sacrifice children to stay in power? I can see none.</p>
<p>I would be grateful if you would answer two questions. Do you support these policies? If so, why?</p>
<p>Yours sincerely</p>
<hr />
<p>I was tempted to add the recent approval of the Rosebank oil field to the list above, but I think it does not belong there. It is obvious to anyone thinking at all about it that no oil will ever come from Rosebank, as it will be cancelled by the next government: it will have no climate impact. But there will of course be fees when the contract is cancelled, to be paid to the oil companies by the government. <em>Cui prodest scelus, is fecit</em>.</p>
<h2>Numerical prediction (2023-07-28, Tim Bradshaw)</h2>
<p>In late 2018, when I still worked at the Met Office, I sent a document to some people there which explained why I thought AI would come to dominate weather forecasting, and why weather forecasting organisations should be looking at AI, urgently. Today, the 28th of July 2023, there is <a href="https://www.economist.com/leaders/2023/07/27/how-ai-could-save-thousands-of-lives-through-weather-forecasting">a leader on the subject in <em>The Economist</em></a> as well as <a href="https://www.economist.com/science-and-technology/2023/07/26/how-to-better-forecast-the-weather">an extended article in its Science and Technology section</a>.</p>
<!-- more-->
<h2 id="2018">2018</h2>
<p><a href="/texts/2023/numerical-prediction.pdf">Here</a><sup><a href="#2023-07-28-numerical-prediction-footnote-1-definition" name="2023-07-28-numerical-prediction-footnote-1-return">1</a></sup> is the document I wrote in 2018: if it was ever sensitive I don’t think it is now. Here are some excerpts from it<sup><a href="#2023-07-28-numerical-prediction-footnote-2-definition" name="2023-07-28-numerical-prediction-footnote-2-return">2</a></sup>:</p>
<blockquote>
<p>Neural networks are likely to provide better weather forecasts in due course than current numerical models. If this is true then weather forecasting organisations that don’t use them will be replaced by ones that do. Even though this only may be true, weather forecasting organisations should be investigating these techniques, today.</p>
<p>[…]</p>
<p>[…] NN models are likely to be highly successful for weather prediction. However they will not be trivial to design and deploy: cargo cult NN approaches are not going to work.</p>
<p>If NN models are successful then they will largely displace hand-crafted physics-based models (GCM models such as UM<sup><a href="#2023-07-28-numerical-prediction-footnote-3-definition" name="2023-07-28-numerical-prediction-footnote-3-return">3</a></sup>). Weather forecasting is a <em>service</em>, and consumers of the service care only about how good the forecasts are rather than how they are produced.</p>
<p>If this happens then organisations involved in weather forecasting, such as the Met Office, will need to adopt NN models or cease to exist: NNs are an <em>existential threat</em> to weather forecasting organisations.</p>
<p>This means that such organisations should be investigating NN models very seriously <em>now</em> so that, in the likely case that they are successful, they are not left behind.</p>
<p>[…]</p>
<p>The traditional approach [to weather forecasting] is to understand the physics and write a system which numerically solves the equations to a lesser or greater degree of accuracy. This has been pretty successful of course.</p>
<p>An alternative approach is to not do that at all, but rather build a system which can, itself, <em>learn</em> to simulate the weather: a system which can be trained to simulate the weather, in other words, based on observations. As far as I’m aware such an approach has not been tried on any significant scale.</p>
<p>[…]</p>
<p><strong>There is copious training data.</strong> There is obviously a really huge amount of data which can be used to drive a model, which NNs love. But NN models need <em>training</em> data in general: they need to be told how well they did so they can correct their weights. And weather is almost the best example of this it’s possible to think of: if we want to predict, say, rainfall in 24 hours time, then, if we wait 24 hours, we know how much rain actually fell, and we can use that data to teach the model how do to better. <em>And this is true for everything, all the time</em>: every time the model makes <em>any</em> prediction about the state at some future time then, at that future time, we know what the state actually is and can use that information to train the model. This is the sort of situation NN people dream about.</p>
<p>[…]</p>
<p>[…] Hand-crafted models are more likely to remain sane than NN models in the early stages. There’s no rule that says that an NN won’t get some mad idea into its head and start, occasionally, making predictions which are completely physically insane.</p>
<p>[…]</p>
<p>While NN models are an almost perfect fit for weather forecasting they are, perhaps surprisingly, a terrible fit for climate modelling. This is for two reasons.</p>
<p><strong>Sparseness of training data.</strong> NNs are likely to work for weather prediction because the training data is so copious: if you want to predict the weather a given time ahead then you simply predict, wait until that amount of time has elapsed and you have training data, and then you iterate this process. You can’t do that for climate: if you want to predict the climate a century ahead you can neither wait for a century for the training data nor can you iterate the process.</p>
<p><strong>Opacity of NN models.</strong> Even if climate modelling by an NN is technically practical it’s an absolutely terrible answer to the questions people actually want to answer. If I run some NN model and it predicts 4 degrees of warming by 2100 the first thing people will ask is ‘why does it predict that?’. And the best answer to that question is ‘because some opaque blob of weights which neither I nor any human understands told me that’, which is a <em>terrible</em> answer: it’s essentially the same as ‘a voice in my head told me’. Given the political sensitivity of climate modelling this is not going to be an answer anyone will accept, and nor should they.</p>
<p>So climate modelling is a really good example of a place where a transparent physics-based model is the only reasonable answer. And that’s ultimately because the people who are interested in climate are <em>not</em> just interested in a statistically-good prediction (whatever that even means in this case): they’re interested in <em>why</em> the prediction is what it is. Climate modelling requires hand-crafted physics-based models, and there’s no way around that.</p></blockquote>
<h2 id="2023">2023</h2>
<p>Here is an excerpt from <a href="https://www.economist.com/leaders/2023/07/27/how-ai-could-save-thousands-of-lives-through-weather-forecasting"><em>The Economist</em>’s leader</a>:</p>
<blockquote>
<p>The application of machine learning and other forms of artificial intelligence (AI) will improve things further. The supercomputers used for NWP calculate the next days’ weather on the basis of current conditions, the laws of physics and various rules of thumb; doing so at a high resolution eats up calculations by the trillion with ridiculous ease. Now machine-learning systems trained simply on past weather data can more or less match their forecasts, at least in some respects. If advances in AI elsewhere are any guide, that is only the beginning.</p></blockquote>
<p>Well, I am not some unique genius: many people could, and probably did, see what was coming when I wrote the 2018 document. I predicted that neural network approaches would come to dominate weather forecasting, and it looks like they will.</p>
<p>But what I also realised remains, I think, important, and is not addressed at all in the articles in <em>The Economist</em>. And that is this:</p>
<ul>
<li>AI, in the form of neural networks, is <em>not</em> a suitable approach to climate prediction both because the training data is inadequate, but more importantly because it is critical that climate models not only predict the climate but allow people to understand <em>why</em> they are predicting what they predict, rather than simply being an opaque blob;</li>
<li>currently climate models, at least in the Met Office and I am sure elsewhere, are to a great extent parasitic on weather models, sharing a great deal of their code with those models.</li></ul>
<p>This means that if weather forecasting becomes dominated by opaque NN models, climate modellers will have to bear the entire cost of funding development of their models. Chances are they can’t do that.</p>
<p>An even worse outcome would be that climate modellers leap into using opaque NN models without thinking through what this means. This would hand the climate denialists who increasingly dominate the politics of the UK a weapon which they would certainly not hesitate to use.</p>
<p>When I sent the 2018 document to people in the Met Office I did not even receive an acknowledgement: I am quite sure nobody read it. I think this says a great deal about the nature of organisations like the Met Office.</p>
<p>Despite how all this might read, I’m not at all embittered by this: if I cared about the Met Office in 2018 I certainly don’t now, four years later. If anything, I’m rather pleased that what I thought, in 2018, would happen does indeed seem to be happening. Most importantly I want the other thing I realised in 2018 — that climate modelling <em>isn’t</em> well-suited to NN approaches and that organisations which do both weather and climate modelling need to worry about this as NN approaches to weather forecasting eat physics-based approaches alive — to exist in some form that is accessible to people. That’s why this article exists.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2023-07-28-numerical-prediction-footnote-1-definition" class="footnote-definition">
<p>The location of this document might change. <a href="https://www.tfeb.org/fragments/2023/07/28/numerical-prediction/">This post itself</a> is a better link to remember as I will update the pointer if I move the document. <a href="#2023-07-28-numerical-prediction-footnote-1-return">↩</a></p></li>
<li id="2023-07-28-numerical-prediction-footnote-2-definition" class="footnote-definition">
<p>Note that I used the term ‘neural network’, abbreviated to ‘NN’ in the document, as I did not then (and do not now) want to lazily consider neural networks to be the same thing as AI. <a href="#2023-07-28-numerical-prediction-footnote-2-return">↩</a></p></li>
<li id="2023-07-28-numerical-prediction-footnote-3-definition" class="footnote-definition">
<p>UM, the Unified Model, was the model the Met Office used for both weather and climate modelling in 2018. <a href="#2023-07-28-numerical-prediction-footnote-3-return">↩</a></p></li></ol></div>
<h2>Enabling financial crime (2023-07-23, Tim Bradshaw)</h2>
<p>The UK government <a href="https://www.gov.uk/government/publications/payment-account-contract-termination-and-freedom-of-expression" title="Payment account contract termination and freedom of expression: Policy statement (UK government document)">wants to force banks to explain why they close accounts</a>. This will make financial crime in the UK easier.</p>
<!-- more-->
<p>From page 8 of the <a href="https://www.gov.uk/government/publications/payment-account-contract-termination-and-freedom-of-expression" title="Payment account contract termination and freedom of expression: Policy statement (UK government document)">government document referenced above</a>:</p>
<blockquote>
<p>Government intends to make changes to existing regulations with the objective of:</p>
<ol>
<li>Improving transparency for users in receiving a clear understanding why their payment account contract has been terminated, by stating in regulation that a clear and tailored explanatory reason must be given, unless to do so would be unlawful.</li>
<li>Requiring that payment account providers must provide at least 90 days’ notice when choosing to terminate a contract, unless for a serious and uncorrected breach (such as non-payment) or other serious occurrence and clarifying that clauses in user agreements purporting to allow termination for other matters (such as brand protection) cannot be used to circumvent this.</li></ol>
<p>Lesser termination periods would exceptionally continue to be allowed, for example where a provider is obliged to terminate the contract to comply with the law, in particular, financial crime law.</p></blockquote>
<p>This will, directly, make financial crime easier in the UK. Here is an example of how it will do so. Under the laws about money laundering, there is an offence called ‘tipping off’ which is exactly what it sounds like: if you tell someone whom you suspect of laundering money about your suspicions then you are tipping them off and this is, not surprisingly, illegal as it would allow them to take appropriate action to avoid being caught. In particular in <a href="https://www.cps.gov.uk/legal-guidance/money-laundering-offences" title="Money laundering offences (UK government document)">this official document</a> is this description:</p>
<blockquote>
<p><strong>Tipping Off</strong>
<br />Under <a href="https://www.legislation.gov.uk/ukpga/2002/29/section/333A">section 333A</a> it is an offence for a person to disclose information, likely to prejudice an investigation, where that information came to the person in the course of business in the regulated sector.</p>
<p>A person guilty of an offence under this section is liable on conviction on indictment to imprisonment for a term exceeding 2 years, or to a fine, or to both.</p></blockquote>
<p>This makes it illegal for a bank (which is a regulated organisation) to tell a customer that they suspect, or more likely have been told by the authorities is under suspicion, of money laundering about those suspicions. Since they can’t just leave their account or accounts open, which would, obviously, support the money laundering activities directly if they’re happening, <em>they must close the account with no explanation</em>.</p>
<p>Now here’s something which banks have worked out, but which the UK treasury apparently hasn’t: it means that they can <em>never tell anyone at all</em> why they are closing an account unilaterally. If, for instance, they tell everybody <em>except</em> people under suspicion of money laundering or other criminal activity why they are closing their accounts then, if a bank closes your account <em>and refuses to tell you why</em>, you immediately know that you are under suspicion of criminal behaviour. You have, in fact, been tipped off by the bank. So they can’t tell anyone why they’re closing their account.</p>
<p><em>But that is exactly what the above proposal requires banks to do.</em> It will require</p>
<blockquote>
<p>[…] that a clear and tailored explanatory reason must be given, unless to do so would be unlawful.</p></blockquote>
<p>and it will require banks to give</p>
<blockquote>
<p>at least 90 days’ notice when choosing to terminate a contract, unless for a serious and uncorrected breach (such as non-payment) or other serious occurrence [except] for example where a provider is obliged to terminate the contract to comply with the law, in particular, financial crime law.</p></blockquote>
<p>In other words this proposed legislation will <em>require</em> banks to tip off their customers: if your bank closes your account without explanation and/or does so with less than 90 days’ notice, you will <em>know</em> they’re doing so because you are under suspicion of some criminal behaviour. If you are indeed a criminal you can then act accordingly.</p>
<p>Despite the confected indignation of Nigel Farage and his enablers, banks are not, in fact, run by people who are ‘woke’: they’re run by people who are <em>very interested in money</em>. Bankers are not generally very nice people. Indeed, banks would <em>very much like</em> to be able to profit from criminal activity and have done so with gleeful abandon for much of history. That is, for instance, what the <a href="https://en.wikipedia.org/wiki/Banking_in_Switzerland" title="Banking in Switzerland (Wikipedia)">entire history of banking in Switzerland</a> is about:</p>
<blockquote>
<p>These secrecy laws have linked the Swiss banking system with individuals and institutions seeking to illegally evade taxes, hide assets, or generally commit financial crime.</p></blockquote>
<p>If banks can, deniably, profit from financial or other crime <em>then they will do so</em>. The UK government is in the process of enabling just that activity. Is this through stupidity or intentional action? To paraphrase <a href="https://twitter.com/Kasparov63/status/862696528003178496">Garry Kasparov</a>: one comforting thing about the 2023 UK government is that you aren’t forced to choose between malice and incompetence. It’s always both.</p>
<h2>Farrago (2023-07-20, Tim Bradshaw)</h2>
<p>A very rich man, on being denied a bank account available only to the extremely rich by a bank which serves only the extremely rich:</p>
<blockquote>
<p>Squealy whine squealy squealy whine cancelled squeal whinge moan</p></blockquote>
<p>An even richer man, on hearing about this outrage:</p>
<blockquote>
<p>Squealy squealy no one should be barred from using basic services for their political views whine squeal probe shock</p></blockquote>
<p>A halfwit, joining in:</p>
<blockquote>
<p>Whine whine exposes the sinister nature of much of the diversity, equity and inclusion industry squeal tantrum blob politically biased dogma whine round up the foreigners squeal small boats elite</p></blockquote>
<p>All together:</p>
<blockquote>
<p>Squealy squeaky SQUEAL whine outrage basic services for the very rich whine squealy cancel culture elite blob squeal</p></blockquote>
<p>I love the sound of entitled plutocrats whining in the morning. It smells like … victory.</p>
<h2>A horrible solution (2023-05-04, Tim Bradshaw)</h2>
<p><a href="https://www.tfeb.org/fragments/2023/05/03/two-sides-to-hygiene/">Yesterday</a> I wrote an article describing one of the ways traditional Lisp macros can be unhygienic even when they appear to be hygienic. Here’s a horrible solution to that.</p>
<!-- more-->
<p>The problem I described is that the expansion of a macro can refer to the values (usually the function values) of names, which the <em>user</em> of the macro can bind, causing the macro to fail. So, given a function</p>
<pre class="brush: lisp"><code>(defun call-with-foo (thunk)
...
(funcall thunk))</code></pre>
<p>Then the macro layer on top of it</p>
<pre class="brush: lisp"><code>(defmacro with-foo (&body forms)
`(call-with-foo (lambda () ,@forms)))</code></pre>
<p>is not hygienic so long as local functions named <code>call-with-foo</code> are allowed:</p>
<pre class="brush: lisp"><code>(flet ((call-with-foo (...) ...))
(with-foo ...))</code></pre>
<p>The <em>sensible</em> solution to this is to say, just as the standard does about symbols in the <code>CL</code> package, that you are not allowed to do that.</p>
<p>Here’s another solution:</p>
<pre class="brush: lisp"><code>(defmacro with-foo (&body forms)
`(funcall (symbol-function 'call-with-foo) (lambda () ,@forms)))</code></pre>
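<p>Now a local binding can no longer capture the call (a sketch, assuming the <code>call-with-foo</code> above does nothing but call its thunk):</p>
<pre class="brush: lisp"><code>> (flet ((call-with-foo (thunk)
           (declare (ignore thunk))
           'shadowed))
    (with-foo 'ok))
ok</code></pre>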
<p>This is robust against anything short of top-level redefinition of <code>call-with-foo</code>. And you can be mostly robust even against that:</p>
<pre class="brush: lisp"><code>(defmacro with-foo (&body forms)
`(funcall (load-time-value (symbol-function 'call-with-foo))
(lambda () ,@forms)))</code></pre>
<p>This still isn’t safe against really malignant users, since the load time of the macro’s definition and its uses are not generally the same. But it’s probably fairly good.</p>
<p>I hope I never feel I have to use techniques like this.</p>
<h2>Two sides to hygiene (2023-05-03, Tim Bradshaw)</h2>
<p>It’s tempting to think that by being sufficiently careful about names bound by traditional Lisp macros you can write macros which are hygienic. This is not true: it’s much harder than that.</p>
<!-- more-->
<h2 id="hygienic-macros">Hygienic macros</h2>
<p>I do not fully understand all the problems which <a href="https://en.wikipedia.org/wiki/Hygienic_macro">Scheme-style hygienic macros</a> try to solve, and the implementation of the solutions is usually sufficiently difficult to understand that I have always been put off doing so, especially as the details of the implementation in <a href="https://racket-lang.org/">Racket</a>, the Scheme-related language I use most, seems to <a href="https://users.cs.utah.edu/plt/scope-sets/">change every few years</a>. I’m happy enough that I am mostly competent to <em>write</em> the macros I need in Racket, without understanding the details of the implementation.</p>
<p>Traditional Lisp macros are, to me, far more appealing because they work in such an explicit and simple way: you could pretty easily write a macroexpander which did most of what the Common Lisp macroexpander does, for instance. I have written several toy versions of such a thing: I’m sure most Lisp people have. Traditional Lisp macros are just functions between bits of language expressed explicitly as s-expressions: what could be simpler?</p>
<p>In fact I am reasonably confident that, if I had to choose one, I’d choose CL’s macros over Racket’s: writing macros in raw CL is a bit annoying because you need explicit gensyms and you need to do pattern matching yourself. But you can write, and I <a href="https://tfeb.org/fragments/2022/09/26/metatronic-macros/">have</a> <a href="https://tfeb.org/fragments/2022/07/21/two-simple-pattern-matchers-for-common-lisp/">written</a> tools to make most of this go away. With these, writing macros in CL can often be very pleasant. And it’s easy to understand what is going on.</p>
<p>What is far harder though, is to make it completely hygienic. Here’s one reason why.</p>
<h2 id="several-versions-of-a-macro-in-common-lisp">Several versions of a macro in Common Lisp</h2>
<p>Let’s imagine I want a macro which allows you to select actions based on the interval a real number is in. It might look like this:</p>
<pre class="brush: lisp"><code>(interval-case x
((0 1) ...)
(((1) 2) ...)
(otherwise ...))</code></pre>
<p>Here intervals are specified the way they are in type specifiers for reals:</p>
<ul>
<li><code>(a b)</code> where <code>a</code> and <code>b</code> are reals means \([a,b]\);</li>
<li><code>((a) b)</code> where <code>a</code> and <code>b</code> are reals means \((a,b]\);</li>
<li>and so on.</li></ul>
<p>There can be only one interval per clause, for simplicity.</p>
<p>I will write several versions of this macro. For all of them I will use <a href="https://tfeb.github.io/#destructuring-match-for-common-lisp">dsm</a> and, later, <a href="https://tfeb.github.io/tfeb-lisp-hax/#metatronic-macros">metatronic macros</a> to make things better.</p>
<p>First of all here’s a function<sup><a href="#2023-05-03-two-sides-to-hygiene-footnote-1-definition" name="2023-05-03-two-sides-to-hygiene-footnote-1-return">1</a></sup> which, given an interval specification, returns a form which will match numbers in that interval:</p>
<pre class="brush: lisp"><code>(defun compute-interval-form (v iv)
(destructuring-match iv
(((l) (h))
(:when (and (realp l) (realp h)))
`(< ,l ,v ,h))
((l (h))
(:when (and (realp l) (realp h)))
`(and (<= ,l ,v) (< ,v ,h)))
(((l) h)
(:when (and (realp l) (realp h)))
`(and (< ,l ,v) (<= ,v ,h)))
((l h)
(:when (and (realp l) (realp h)))
`(<= ,l ,v ,h))
(default
(:when (member default '(t otherwise)))
t)
(otherwise
(error "~S is not an interval designator" iv))))</code></pre>
<h3 id="a-hopeless-version">A hopeless version</h3>
<p>Here is a version of this macro which is entirely hopeless:</p>
<pre class="brush: lisp"><code>(defmacro interval-case (n &body clauses)
;; Hopeless
`(cond
,@(mapcar (lambda (clause)
(destructuring-bind (iv &body forms) clause
`(,(compute-interval-form n iv) ,@forms)))
clauses)))</code></pre>
<p>It’s hopeless because of this:</p>
<pre class="brush: lisp"><code>> (let ((x 1))
(interval-case (incf x)
((1 (2)) '(1 (2)))
((2 (3)) '(2 (3)))))
nil</code></pre>
<p>So <code>(incf x)</code> where <code>x</code> is initially <code>1</code> is apparently neither in \([1,2)\) nor \([2,3)\) which is strange. This is happening, of course, because the macro is multiply-evaluating its argument, which it should not do.</p>
<h3 id="an-obviously-unhygienic-repair">An obviously unhygienic repair</h3>
<p>So let’s try to fix that:</p>
<pre class="brush: lisp"><code>(defmacro interval-case (n &body clauses)
;; Unhygenic
`(let ((v ,n))
(cond
,@(mapcar (lambda (clause)
(destructuring-bind (iv &body forms) clause
`(,(compute-interval-form 'v iv) ,@forms)))
clauses))))</code></pre>
<p>Well, this is better:</p>
<pre class="brush: lisp"><code>> (let ((x 1))
(interval-case (incf x)
((1 (2)) '(1 (2)))
((2 (3)) '(2 (3)))))
(2 (3))</code></pre>
<p>but … not much better:</p>
<pre class="brush: lisp"><code>> (let ((x 1) (v 10))
(interval-case (incf x)
((1 (2)) nil)
((2 (3)) v)))
2</code></pre>
<p>The macro binds <code>v</code>, which shadows the outer binding of <code>v</code> and breaks everything.</p>
<h3 id="a-repair-which-might-be-hygienic">A repair which might be hygienic</h3>
<p>Here is the normal way to fix that:</p>
<pre class="brush: lisp"><code>(defmacro interval-case (n &body clauses)
;; OK
(let ((vn (make-symbol "V")))
`(let ((,vn ,n))
(cond
,@(mapcar (lambda (clause)
(destructuring-bind (iv &body forms) clause
`(,(compute-interval-form vn iv) ,@forms)))
clauses)))))</code></pre>
<p>And now</p>
<pre class="brush: lisp"><code>> (let ((x 1) (v 10))
(interval-case (incf x)
((1 (2)) nil)
((2 (3)) v)))
10</code></pre>
<p>Good. I think it is possible to argue that this version of the macro is hygienic, at least in terms of names.</p>
<h3 id="a-simpler-repair-using-metatronic-macros">A simpler repair using metatronic macros</h3>
<p>Here is the previous macro written using metatronic macros:</p>
<pre class="brush: lisp"><code>(defmacro/m interval-case (n &body clauses)
;; OK, easier
`(let ((<v> ,n))
(cond
,@(mapcar (lambda (clause)
(destructuring-bind (iv &body forms) clause
`(,(compute-interval-form '<v> iv) ,@forms)))
clauses))))</code></pre>
<p>This is simpler to read and should be as good.</p>
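<p>In effect <code><v></code> and <code>'<v></code> in the definition both turn into the same fresh symbol, which no user code can ever name, so the expansion is essentially the same as that of the <code>make-symbol</code> version above:</p>
<pre class="brush: lisp"><code>(let ((<v> (incf x)))    ;<v> here is really a fresh uninterned symbol
  (cond
   ((and (<= 1 <v>) (< <v> 2)) nil)
   ((and (<= 2 <v>) (< <v> 3)) v)))</code></pre>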
<h3 id="an-alternative-approach-">An alternative approach …</h3>
<p>Although it is not entirely natural in the case of this macro, many macros can be written by having the macro expand into a call to a function, passing another function whose body is the body of the macro as an argument. These things often exist as pairs of <code>with-</code>* (the macro) and <code>call-with-</code>* (the function).</p>
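<p>As a generic sketch of that pattern (the names here are invented, not from any real library):</p>
<pre class="brush: lisp"><code>(defun call/widget (f)
  ;; the function does the real work ...
  (let ((w (allocate-widget)))          ;hypothetical
    (unwind-protect
        (funcall f w)
      (free-widget w))))                ;hypothetical

(defmacro with-widget ((var) &body body)
  ;; ... and the macro is a thin layer of syntax over it
  `(call/widget (lambda (,var) ,@body)))</code></pre>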
<p>We can persuade <code>interval-case</code> to work like that: it’s not a natural macro to write this way, and writing it this way ends up with something almost certainly less efficient, since (at least the way I’ve written it) it needs to interpret the interval specifications at runtime rather than compiling them<sup><a href="#2023-05-03-two-sides-to-hygiene-footnote-2-definition" name="2023-05-03-two-sides-to-hygiene-footnote-2-return">2</a></sup>. But I wanted to have just one example.</p>
<p>Here is <code>call/intervals</code>, the function layer:</p>
<pre class="brush: lisp"><code>(defun call/intervals (n ivs/thunks)
  ;; Given a real n and a list of (interval-spec thunk ...), find the
  ;; first spec that n matches and call its thunk.
  (if (null ivs/thunks)
      nil
      (destructuring-bind (iv thunk . more) ivs/thunks
        (if (destructuring-match iv
              (((l) (h))
               (:when (and (realp l) (realp h)))
               (< l n h))
              ((l (h))
               (:when (and (realp l) (realp h)))
               (and (<= l n) (< n h)))
              (((l) h)
               (:when (and (realp l) (realp h)))
               (and (< l n) (<= n h)))
              ((l h)
               (:when (and (realp l) (realp h)))
               (<= l n h))
              (default
               (:when (member default '(t otherwise)))
               t)
              (otherwise
               (error "~S is not an interval designator" iv)))
            (funcall thunk)
            (call/intervals n more)))))</code></pre>
<p>As well, here is a ‘nospread’ variation on <code>call/intervals</code> which serves as an impedance matcher:</p>
<pre class="brush: lisp"><code>(defun call/intervals* (n &rest ivs/thunks)
  ;; Impedance matcher
  (declare (dynamic-extent ivs/thunks))
  (call/intervals n ivs/thunks))</code></pre>
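<p>The function layer can, of course, be called directly:</p>
<pre class="brush: lisp"><code>> (call/intervals* 1.5
                   '((1) (2)) (lambda () 'unit)
                   t (lambda () 'other))
unit

> (call/intervals* 3
                   '((1) (2)) (lambda () 'unit)
                   t (lambda () 'other))
other</code></pre>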
<p>Now here’s the macro layer:</p>
<pre class="brush: lisp"><code>(defmacro interval-case (n &body clauses)
  ;; Purports to be hygienic
  `(call/intervals*
    ,n
    ,@(mapcan (lambda (clause)
                `(',(car clause)
                  (lambda () ,@(cdr clause))))
              clauses)))</code></pre>
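<p>An <code>interval-case</code> form like the one in the test below now expands to:</p>
<pre class="brush: lisp"><code>(call/intervals* (incf x)
                 '(1 (2)) (lambda () nil)
                 '(2 (3)) (lambda () v))</code></pre>
<p>so the argument is evaluated exactly once, as an ordinary argument, and each clause body becomes a closure.</p>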
<p>So we can test this:</p>
<pre class="brush: lisp"><code>> (let ((x 1) (v 10))
    (interval-case (incf x)
      ((1 (2)) nil)
      ((2 (3)) v)))
10</code></pre>
<p>So, OK, that’s good, right? This is another hygienic macro. Not so fast.</p>
<h3 id="which-is-not-hygienic">… which is not hygienic</h3>
<pre class="brush: lisp"><code>> (flet ((call/intervals* (&rest junk)
           (declare (ignore junk))
           86))
    (interval-case 2
      ((1 2) 'two)))
86</code></pre>
<p>Not so hygienic, then.</p>
<h3 id="the-alternative-approach-in-racket">The alternative approach in Racket</h3>
<p>Here is a similar alternative approach implemented in Racket:</p>
<pre class="brush: racket"><code>(define (call/intervals n ivs/thunks)
  ;; Here ivs/thunks is a list of (iv thunk) pairs, which is not the same
  ;; as the CL version: that's because I can't work out how to do the
  ;; syntax rule otherwise.
  (match ivs/thunks
    ['() #f]
    [(list (list iv thunk) more ...)
     (if
      (match iv
        [(list (list (? real? l))
               (list (? real? h)))
         (< l n h)]
        [(list (? real? l)
               (list (? real? h)))
         (and (<= l n) (< n h))]
        [(list (list (? real? l))
               (? real? h))
         (and (< l n) (<= n h))]
        [(list (? real? l) (? real? h))
         (<= l n h)]
        [(or 'otherwise #t)
         #t]
        [_
         (error 'call/intervals "~S is not an interval designator" iv)])
      (thunk)
      (call/intervals n more))]))

(define (call/intervals* n . ivs/thunks)
  ;; impedance matcher (not so useful here)
  (call/intervals n ivs/thunks))

(define-syntax-rule (interval-case n (key body ...) ...)
  (call/intervals* n (list 'key (thunk body ...)) ...))</code></pre>
<p>And now:</p>
<pre class="brush: racket"><code>> (call/intervals* 1 (list '(0 1) (thunk 3)))
3
> (interval-case 2
    ((1 2) 'two))
'two
> (let ([call/intervals* (thunk* 86)])
    (interval-case 2
      ((1 2) 'two)))
'two
> (let ([call/intervals* (thunk* 86)])
    (call/intervals* 2))
86</code></pre>
<p>In Racket this macro is hygienic.</p>
<h2 id="two-sides-to-hygiene">Two sides to hygiene</h2>
<p>So the problem here is that there are at least <em>two sides to hygiene</em> for macros:</p>
<ul>
<li>names they use, usually by binding variables but also in other ways, must not interfere with names used in the program where the macro is used;</li>
<li>the program where the macro is used must not be able to alter what names the macro <em>refers to</em> mean.</li></ul>
<p>In both cases, of course, there need to be exceptions which are part of the macro’s contract with its users: <code>with-standard-io-syntax</code> is allowed (and indeed required) to bind <code>*print-case*</code> and many other variables.</p>
<p>I think almost everyone understands the first of these problems, but the second is much less often thought about.</p>
<h2 id="dealing-with-this-problem-in-common-lisp">Dealing with this problem in Common Lisp</h2>
<p>I think a full solution to this problem in CL would be very difficult: macros would have to refer to the names they rely on by names which were somehow unutterable by the programs that used them. Short of actually writing a fully-fledged hygienic macro system for CL this sounds impractical.</p>
<p>In practice the solution is to essentially extend what CL already does. For symbols (so, names) in the CL package there are <a href="http://www.lispworks.com/documentation/HyperSpec/Body/11_aba.htm">strong restrictions</a> on what conforming programs may do. This program is not legal CL<sup><a href="#2023-05-03-two-sides-to-hygiene-footnote-3-definition" name="2023-05-03-two-sides-to-hygiene-footnote-3-return">3</a></sup> for instance:</p>
<pre class="brush: lisp"><code>(flet ((car (x) x))
  ... (car ...))</code></pre>
<p>So the best answer is then, I think, to:</p>
<ul>
<li>use packages with well-defined interfaces in the form of exported symbols;</li>
<li>disallow or strongly discourage the use of internal symbols of packages by programs which are not part of the implementation of the package;</li>
<li>and finally place restrictions similar to those placed on the CL package on <em>exported</em> symbols of your packages.</li></ul>
<p>Note that package <em>locks</em> don’t answer this problem: they usually forbid the modification of various attributes of symbols and the creation or deletion of symbols, but what is needed is considerably stronger than that: it needs to be the case that you can’t establish any kind of binding, even a lexical one, for symbols in the package.</p>
<p>Is this a problem in practice? Probably not often. Do I still prefer traditional Lisp macros? Yes, I think so.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2023-05-03-two-sides-to-hygiene-footnote-1-definition" class="footnote-definition">
<p>This function is what you would want to make more complicated to allow multiple intervals per clause. <a href="#2023-05-03-two-sides-to-hygiene-footnote-1-return">↩</a></p></li>
<li id="2023-05-03-two-sides-to-hygiene-footnote-2-definition" class="footnote-definition">
<p>This interpretation could be avoided by having the compiler turn the interval specifications into one-argument functions. I think it’s still not a natural way to write this macro. <a href="#2023-05-03-two-sides-to-hygiene-footnote-2-return">↩</a></p></li>
<li id="2023-05-03-two-sides-to-hygiene-footnote-3-definition" class="footnote-definition">
<p>Assuming that <code>car</code> means ‘the symbol whose name is <code>"CAR"</code> in the <code>"COMMON-LISP"</code> package’. <a href="#2023-05-03-two-sides-to-hygiene-footnote-3-return">↩</a></p></li></ol></div>
<h1>Nirvana</h1>
<p><em>Tim Bradshaw, 2023-05-02</em></p>
<p>An article constructed from several emails from my friend Zyni, reproduced with her permission. Note that Zyni’s first language is not English.</p>
<!-- more-->
<p>Many people have tried to answer what is so special about Lisp by talking about many things.</p>
<p>Such as interactive development, a thing common now to many languages of course, and if you use Racket with DrRacket not in fact how development usually works there at all. Are we to cast Racket into the outer darkness?<sup><a href="#2023-05-02-nirvana-footnote-1-definition" name="2023-05-02-nirvana-footnote-1-return">1</a></sup></p>
<p>Such as CLOS, a thing specific to Common Lisp: can you not achieve Lisp enlightenment unless you program in Common Lisp? Was Lisp enlightenment impossible before CLOS existed? What stupid ideas. Could you implement CLOS in a language which was not Lisp? Certainly you could.</p>
<p>Such as the CL condition system: a thing also specific to Common Lisp. Something also which could be implemented in any sufficiently dynamic language. Something almost nobody who writes in Common Lisp understands I think.</p>
<p>And so it goes on.</p>
<p>None of this is the answer. None of this is close to the answer. To find the answer ask <em>why</em> did these things arise in Lisp first? What is the property of Lisp which is in fact unique to Lisp and which <em>defines</em> Lisp in strict sense that if any other language had this property <em>it would be a Lisp</em>? To see answer to this you must understand <a href="https://www.tfeb.org/fragments/2022/10/03/bradshaw-s-laws/" title="Bradshaw's law">Bradshaw’s law</a> and my corollary to it:</p>
<p><strong>Bradshaw’s law.</strong> <em>All sufficiently large software systems end up being programming languages.</em></p>
<p><strong>Zyni’s corollary.</strong> <em>At whatever size you think Bradshaw’s law applies, it applies sooner than that.</em></p>
<p>This means that <em>all programming is language construction</em>.<sup><a href="#2023-05-02-nirvana-footnote-2-definition" name="2023-05-02-nirvana-footnote-2-return">2</a></sup> When you write a program you are writing a language in which to express the problem you wish to solve.</p>
<p>Now you can begin understand what is so interesting about Lisp. In almost all programming languages when you solve a problem you define a lot of new words for the language you have, and perhaps you define elaborate classifications of the nouns of the language you will allow. But you can do nothing with the structure of the language you must use because the language will not allow that: it has a fixed grammar handed down by the great and good who designed it who are sometimes not fools. And indeed you are fiercely discouraged from even understanding what it is you are doing: discouraged from understanding that you are building a new language.</p>
<p>And quite soon (sooner than you think and in fact immediately) you find you must actually have new structure, new <em>grammar</em>. But you cannot do this easily both because the language you use does not allow it and also because you do not know what it is you are doing – you do not realise that you are making a language. So probably you use a templating system or something and build an awful horror. Often this horror will have nested languages where inner languages appear in strings in outer languages. Often it will have evaluation rules so obscure and inconsistent that it is impossible for humans to write safe large programs in this language (Unix shells: I look at you). We have all seen these things.</p>
<p>And so you live out your life crawling in the dirt, never understanding what thing it is of which you are making a very bad, very unsafe, very ugly version. Because you have been taught there is only mud so all you do is pile up structures out of mud, to be washed away by the next rain. A little way over is a tribe who knows only straw and they build structures from straw which blow away in the first wind. You hate them; they hate you. Sometimes you have little wars.</p>
<p>What, on the other hand, do you do in Lisp? Well, few days ago I needed a way to express the idea of searching some (very) large structure and being able to fail in a structured way. So after ten minutes work, my program now says things like this:</p>
<pre class="brush: lisp"><code>(defun big-search-thing (thing)
  (attempting
    (quick-and-dirty thing)
    (try-harder thing)))

(defun try-harder (thing)
  (walking-thing (node thing :level 0)
    (attempting
      (first-pass thing)
      (desperate-fallback thing))))

(defun first-pass (thing)
  ...
  (when doom (fail))
  ...)</code></pre>
<p>Well it does not matter what this does and this is not what my program is actually like, but what is clear just by looking is that <em>this language is not Common Lisp</em>. Instead it is Common Lisp extended with at least two new grammatical constructs: <code>attempting</code> with its friend <code>fail</code> which looks like a verb but in fact is a control construct really, and <code>walking-thing</code> which is some kind of new iteration construct perhaps.</p>
<p>And there is more: when you look at <code>attempting</code> you will find it is implemented (by a function which) uses a construct called <code>looping</code> which is <em>another</em> extension to Common Lisp. And similarly for <code>walking-thing</code> (which is not really called that) which uses I think four separate new grammatical constructs I do not remember.</p>
<p>And there is more: when I started this essay these constructs were mostly as I showed above, but we have decided this was wrong, so the new language is now somewhat different and somewhat richer. A few more tens of minutes of work, most of it altering the existing programs in the old language to use the new language. The new language is even defined using a language-extending construct which itself is an extension to CL’s provided ones.</p>
<p>And this is how you program in Lisp. <em>In Lisp, writing programs is building languages</em>: in Lisp to solve a problem is to first build a language in which the problem may be solved. And because doing this is so easy in Lisp, this is what you do even for very small problems: you incrementally extend the grammar of the language — not just its lexicon — to create a language in which to describe the problem.</p>
<p>Well, this is not surprising, is it? This is what the laws imply: programming <em>is</em> constructing languages, and this applies even for very small programs. What is surprising is that so few languages encourage this. And because they do not we end up with the horror we all know. Perhaps even this is not surprising: any language which supports this well will have all the characteristics of Lisp, will in fact <em>be</em> a Lisp. So no other languages do this because to do it requires being Lisp. So why is Lisp not more popular? Well, answer is fairly easy but this is discussion for another day, I think.</p>
<p>And now we see why Lisp got features first: because it could. Let us say you wish to explore an object system in Lisp. Well, perhaps you will want a class-defining construct, so you write a macro, <code>define-class</code> or something. And you wish to be able to send messages, so you write a <code>send</code> function and then you modify the readtable so <code>[o message ...]</code> is <code>(send o message ...)</code>. And perhaps you wish some new binding construct for fields so you write <code>with-fields</code> and so, and so.</p>
<p>And now you have a new language. If you were careful you may even have constructed that new language inside a single running Lisp image. And this took, perhaps, some hours. And later, you decide that no, you wish your new language to be different, so you change it. Another few hours. Eventually, in a different world, you call this part of the language ZLOS and there is a standard.</p>
<p>And this is why these linguistic innovations happen in Lisp: because Lisp is a machine for linguistic innovation. It is <em>that</em> feature of Lisp which makes it interesting, and it is <em>only</em> that feature: both because all other features derive from that one and because to have that feature is to be Lisp.</p>
<p>That is all.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2023-05-02-nirvana-footnote-1-definition" class="footnote-definition">
<p>Do not answer this or I will kill you with a stale loaf of bread. <a href="#2023-05-02-nirvana-footnote-1-return">↩</a></p></li>
<li id="2023-05-02-nirvana-footnote-2-definition" class="footnote-definition">
<p>This is exaggeration: if you define <em>no</em> names in your program you are, perhaps, not constructing a language. <a href="#2023-05-02-nirvana-footnote-2-return">↩</a></p></li></ol></div>
<h1>Something unclear in the Common Lisp standard</h1>
<p><em>Tim Bradshaw, 2023-04-18</em></p>
<p>There is what I think is a confusion as to bound declarations in the Common Lisp standard. I may be wrong about this, but I think I’m correct.</p>
<!-- more-->
<h2 id="bound-and-free-declarations">Bound and free declarations</h2>
<p><a href="http://www.lispworks.com/documentation/HyperSpec/Body/03_c.htm">Declarations</a> in Common Lisp can be either <a href="http://www.lispworks.com/documentation/HyperSpec/Body/03_cd.htm">bound or free</a>:</p>
<ul>
<li>a <strong>bound</strong> declaration appears at the head of a binding form and applies to a variable or function binding made by that form;</li>
<li>a <strong>free</strong> declaration is any declaration which is not bound.</li></ul>
<p>There are declarations which do not apply to bindings, such as <code>optimize</code>: these are always free.</p>
<h2 id="examples-of-bound-and-free-declarations">Examples of bound and free declarations</h2>
<p>In the form</p>
<pre class="brush: lisp"><code>(let ((x 1))
  (declare (type integer x))
  ...)</code></pre>
<p>the declaration is bound and applies to the binding of <code>x</code>. In the form</p>
<pre class="brush: lisp"><code>(let ((/x/ 1))
  (declare (special /x/)
           (optimize (speed 3)))
  ...)</code></pre>
<p>the <code>special</code> declaration is bound and applies to the binding of <code>/x/</code>, while the <code>optimize</code> declaration is free.</p>
<p>In the form</p>
<pre class="brush: lisp"><code>(let ((x 1))
  (locally
    (declare (type integer x)
             (optimize speed))
    ...)
  ...)</code></pre>
<p>Both declarations are free and apply only to the body of the <code>locally</code> form.</p>
<h2 id="declarations-which-may-not-be-ignored">Declarations which may not be ignored</h2>
<p>Most declarations may be ignored by the implementation: this is the case for all type declarations, for instance. Two may not be:</p>
<ul>
<li><code>notinline</code> forbids inline compilation of the functions it names;</li>
<li><code>special</code> requires dynamic bindings to be made when it is bound, and requires references to be to dynamic, not lexical, bindings when it is free (see the sketch below).</li></ul>
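<p>Here is a minimal sketch of the two cases (assuming <code>x</code> has not been proclaimed special):</p>
<pre class="brush: lisp"><code>(defun fluid-x ()
  (let ((x 1))
    (declare (special x))            ;bound: this binding of x is dynamic
    (funcall (lambda ()
               (declare (special x)) ;free: this reference is dynamic
               x))))</code></pre>
<p>Calling <code>(fluid-x)</code> returns <code>1</code>: the inner reference finds the dynamic binding, there being no lexical one.</p>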
<p>I’m going to exploit the non-ignorability of <code>special</code> declarations to show a case where the confusion arises.</p>
<h2 id="the-confusion">The confusion</h2>
<p>Forms like <a href="http://www.lispworks.com/documentation/HyperSpec/Body/s_let_l.htm"><code>let*</code></a> bind <em>sequentially</em>:</p>
<pre class="brush: lisp"><code>(let* ((x 1) (y x))
  ...)</code></pre>
<p>first binds <code>x</code> and then binds <code>y</code> to the value of <code>x</code>. Now, I am not sure the standard ever says this, but all implementations I have tried take this to mean that <em>the same name can be bound several times by <code>let*</code></em>:</p>
<pre class="brush: lisp"><code>(let* ((x 1) (x x))
  ...)</code></pre>
<p>is legal, if stylistically awful. That’s because the obvious transformation of <code>let*</code> into nested <code>let</code>s turns this into:</p>
<pre class="brush: lisp"><code>(let ((x 1))
  (let ((x x))
    ...))</code></pre>
<p>which is clearly fine.</p>
<p>So now we come to the problem: what should this mean?</p>
<pre class="brush: lisp"><code>(let* ((x 1) (x x))
  (declare (type fixnum x))
  ...)</code></pre>
<p>Which binding of <code>x</code> does the declaration apply to? The standard does not say. In this case it might not matter, because this declaration can be ignored, but here is a case where it <em>does</em> matter:</p>
<pre class="brush: lisp"><code>(let (c)
  (let* ((/x/ 1)
         (/x/ (progn
                (setf c (lambda () /x/))
                2)))
    (declare (special /x/))
    (values c (lambda () /x/))))</code></pre>
<p>This expression returns two values, both of which are functions:</p>
<ul>
<li>if the first <code>/x/</code> is special then calling the first function will result in an error;</li>
<li>if the second <code>/x/</code> is special then calling the second function will result in an error.</li></ul>
<p>So using this trick you can know whether the first binding, second binding, or both bindings are affected by the <code>special</code> declaration.</p>
<p>And, again, the standard does not say which binding is affected, or whether both should be. And implementations differ. Given the following file</p>
<pre class="brush: lisp"><code>(in-package :cl-user)

(defun call-ok-p (f)
  (multiple-value-bind (v c)
      (ignore-errors
        (funcall f)
        t)
    (declare (ignore c))
    v))

(defun ts ()
  (multiple-value-bind (one two)
      (let (c)
        (let* ((/x/ 1)
               (/x/ (progn
                      (setf c (lambda () /x/))
                      2)))
          (declare (special /x/))
          (values c (lambda () /x/))))
    (values (call-ok-p one)
            (call-ok-p two))))

(multiple-value-bind (first-lexical second-lexical) (ts)
  (format t "~&first ~:[special~;lexical~]~%~
             second ~:[special~;lexical~]~%"
          first-lexical second-lexical))</code></pre>
<p><strong>SBCL</strong></p>
<pre><code>first lexical
second special</code></pre>
<p><strong>CCL</strong></p>
<pre><code>first special
second special</code></pre>
<p><strong>LispWorks</strong></p>
<pre><code>first special
second special</code></pre>
<h2 id="what-should-the-answer-be">What should the answer be?</h2>
<p>I think that the interpretation taken by CCL and LispWorks is better: in forms like this declarations should apply to <em>all</em> the bindings made by the form. An alternative answer is that the declarations should apply to the <em>visible</em> bindings at the point of the declaration, which is the approach taken by SBCL.</p>
<p>It’s tempting to say that the obvious rewrite of <code>let*</code> as nested <code>let</code>s gives you the SBCL answer, but it does not. In a form like</p>
<pre class="brush: lisp"><code>(let* ((x 3) (y x))
  (declare (type integer x)
           (type (integer 0) y))
  ...)</code></pre>
<p>This must be rewritten as</p>
<pre class="brush: lisp"><code>(let ((x 3))
  (declare (type integer x))
  (let ((y x))
    (declare (type (integer 0) y))
    ...))</code></pre>
<p>So the declaration for <code>x</code> must be raised out of the inner <code>let</code> so it remains bound: the implementation already has to do work to get declarations in the right place and can’t just naïvely rewrite the form.</p>
<p>I prefer the first interpretation both because I think it represents what people are likely to want more closely, and because I think the standard could be interpreted as meaning that without being rewritten.</p>
<h2 id="does-this-matter">Does this matter?</h2>
<p>Probably only in very obscure cases! I just thought it was interesting.</p>
<hr />
<p>Thanks to various people on the Lisp-HUG mailing list for coming up with this.</p>
<h1>Measuring some tree-traversing functions</h1>
<p><em>Tim Bradshaw, 2023-03-26</em></p>
<p>In a <a href="https://www.tfeb.org/fragments/2023/03/13/variations-on-a-theme/" title="Variations on a theme">previous article</a> my friend Zyni wrote some variations on a list-flattening function, some of which were ‘recursive’ and some of which ‘iterative’, managing the stack explicitly. We thought it would be interesting to see what the performance differences were, both for this function and a more useful variant which searches a tree rather than flattening it.</p>
<!-- more-->
<h2 id="what-we-measured">What we measured</h2>
<p>The code we used is <a href="https://github.com/tfeb/zyni-flatten" title="sample code">here</a><sup><a href="#2023-03-26-measuring-some-tree-traversing-functions-footnote-1-definition" name="2023-03-26-measuring-some-tree-traversing-functions-footnote-1-return">1</a></sup>. We measured four variations of each of two functions.</p>
<h3 id="list-flattening">List flattening</h3>
<p>All these functions use <a href="https://tfeb.github.io/tfeb-lisp-hax/#collecting-lists-forwards-and-accumulating-collecting" title="collecting"><code>collecting</code></a> to build their results forwards. They live in <a href="https://github.com/tfeb/zyni-flatten/blob/main/flatten-variants.lisp" title="flatten-variants.lisp"><code>flatten-variants.lisp</code></a>.</p>
<ul>
<li><code>flatten/implicit-stack</code> works in the obvious recursive way, with an implicit stack. This uses <a href="https://tfeb.github.io/tfeb-lisp-hax/#applicative-iteration-iterate" title="iterate"><code>iterate</code></a> to express the local recursive function.</li>
<li><code>flatten/explicit-stack</code> uses an explicit stack (called <code>agenda</code> in the code) represented as a vector, and uses <a href="https://tfeb.github.io/tfeb-lisp-hax/#decomposing-iteration-simple-loops" title="looping"><code>looping</code></a> to express iteration.</li>
<li><code>flatten/explicit-stack/adja</code> is like the previous function but it is willing to extend the explicit stack, which it does by using <code>adjust-array</code> and assignment.</li>
<li><code>flatten/explicit-stack/adjb</code> is like <code>flatten/explicit-stack/adja</code> but uses a local tail-recursive function to <em>bind</em> the extended stack rather than assignment.</li>
<li>Finally <code>flatten/consy-stack</code> is very close to Zyni’s original iterative solution: it represents the stack as a list. This version necessarily conses fairly copiously.</li></ul>
<h3 id="searching-cons-trees">Searching cons trees</h3>
<p>These functions, in <a href="https://github.com/tfeb/zyni-flatten/blob/main/treesearch-variants.lisp" title="treesearch-variants.lisp"><code>treesearch-variants.lisp</code></a>, correspond to the flattening variants, except they are searching for some atomic value in the tree of conses:</p>
<ul>
<li><code>search/implicit-stack</code> uses an implicit stack;</li>
<li><code>search/explicit-stack</code> uses a vector;</li>
<li><code>search/explicit-stack/adja</code> uses a vector and adjusts by assignment;</li>
<li><code>search/explicit-stack/adjb</code> uses a vector and adjusts by binding;</li>
<li><code>search/consy-stack</code> uses a consy stack.</li></ul>
<h3 id="notes-on-the-code">Notes on the code</h3>
<p>The functions all have <code>(declare (optimize (speed 3)))</code> but specifically <em>don’t</em> turn off safety or use implementation-specific settings: we wanted to test code we felt we’d be happy running, and that means code compiled with reasonable settings for safety: if you turn safety off you’re brave, foolish, or both.</p>
<p>We did not compare <code>looping</code> with <code>do</code> or <code>loop</code>: we probably should. However the expansion of <code>looping</code> is pretty straightforward:</p>
<pre class="brush: lisp"><code>(looping ((this o) (depth 0))
  (declare ...)
  ...)</code></pre>
<p>Turns into</p>
<pre class="brush: lisp"><code>(let ((this o) (depth 0))
  (declare ...)
  (block nil
    (tagbody
     #:start
      (multiple-value-setq (this depth) ...)
      (go #:start))))</code></pre>
<p>The only real question here, we think, is whether <code>multiple-value-setq</code> is compiled well: brief inspection implies it is. We should probably still compare the current version with more ‘native CL’ variants.</p>
<p>The variants which use a vector as a stack maintain the current element themselves: that’s because we tested using a fill pointer and <code>vector-push</code> / <code>vector-pop</code> and it was really significantly slower in both implementations.</p>
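<p>Concretely, the manual version looks something like this sketch (this is not the benchmarked code itself):</p>
<pre class="brush: lisp"><code>(let ((agenda (make-array 1000))
      (pointer 0))                      ;index of the next free slot
  (flet ((push-agenda (x)
           (setf (aref agenda pointer) x)
           (incf pointer))
         (pop-agenda ()
           (aref agenda (decf pointer))))
    (push-agenda 'thing)
    (pop-agenda)))                      ;-> thing</code></pre>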
<h2 id="what-we-did">What we did</h2>
<h3 id="the-lisp-implementations-we-used">The Lisp implementations we used</h3>
<p>We used LispWorks 8.0 and very recent SBCL builds, compiled from the <code>master</code> branch no more than a few days before we ran the tests in mid March 2023.</p>
<p>In the case of SBCL we paid attention to notes and warnings during compilation. The significant one we did <em>not</em> address was that it complained vociferously about not being able to optimize calls to <code>eql</code>: that’s because we don’t know the type of the thing we are searching for: it <em>needs</em> to do the work it is trying to avoid. Apart from this the only warnings were about the computation of the new length of the agenda, which never actually happens in the tests we ran.</p>
<h3 id="the-machines-we-benchmarked-on">The machines we benchmarked on</h3>
<p>We both have M1-based Macbook Airs so this is what we used. In particular we have not run any benchmarks on x64.</p>
<h3 id="what-we-ran">What we ran</h3>
<p><code>make-car-cdr</code>, in <a href="https://github.com/tfeb/zyni-flatten/blob/main/common.lisp" title="common.lisp"><code>common.lisp</code></a>, makes a list where each element is a chain linked by cars, finally terminating in a specified element. Controlling the length of the list and the depth of the chains gives the functions more iterative or more recursive work to do respectively. The benchmarking code then made a series of suitable structures of increasing size and timed many iterations of each function on the same structure, computing the time per call. We then wrote a program in Racket to plot the results on axes of ‘breadth’ (length of the list) and ‘depth’ (depth of the car-linked chain). For the search functions the element being searched for was not in the tree so they had to do as much work as possible.</p>
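<p>A sketch of such a structure-builder (the real <code>make-car-cdr</code> is in <code>common.lisp</code> and may well differ in detail):</p>
<pre class="brush: lisp"><code>(defun make-car-cdr-sketch (breadth depth element)
  ;; a list of BREADTH elements, each a chain of DEPTH conses linked
  ;; through their cars and terminating in ELEMENT
  (flet ((chain ()
           (let ((c element))
             (loop repeat depth
                   do (setf c (list c)))
             c)))
    (loop repeat breadth
          collect (chain))))</code></pre>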
<p>Life was usually arranged so that the initial agenda was big enough for the functions which used a vector as the agenda, so none of that aspect of them was tested, except for one case below. Apart from that case, the ‘vector stack’ timings refer to <code>flatten/explicit-stack</code> and <code>treesearch/explicit-stack</code>, not the adjustable-stack variants.</p>
<h2 id="some-results">Some results</h2>
<p>We timed 1,000 iterations of each call, for list lengths (breadth in the plots and below) from 30 to 1,000 in steps of 10 and depths (depth in the plots and below) from 10 to 300 in steps of 10, computing times in μs per iteration. Neither of us knows anything about how data like this should be best presented but simply plotting the performance surfaces seemed reasonable. We used bilinear interpolation to make the surface from the points<sup><a href="#2023-03-26-measuring-some-tree-traversing-functions-footnote-2-definition" name="2023-03-26-measuring-some-tree-traversing-functions-footnote-2-return">2</a></sup>.</p>
<h3 id="lispworks">LispWorks</h3>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-treesearch-implicit-vector.svg" alt="Treesearch: implicit compared with vector stack" />
<p class="caption">Treesearch: implicit compared with vector stack</p></div>
<p>This is nicely linear in both breadth and depth, and so quadratic in breadth \(\times\) depth. And it’s easy to see that for LW using the implicit stack is faster than the manually-managed stack.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-treesearch-vector-consy.svg" alt="Treesearch: vector stack compared with consy stack" />
<p class="caption">Treesearch: vector stack compared with consy stack</p></div>
<p>This compares the vector stack with the consy stack, for treesearch. The consy stack is slightly faster, which surprised us. This conses a list as long as the depth of the tree for each ‘leftward’ branch, and then immediately unwinds that and throws the whole list away. So it creates significant garbage, but the allocation and garbage-collection overhead together is still faster than using a vector. Consing really is (almost) free.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-treesearch-flatten.svg" alt="Treesearch compared with flatten, both with implicit stacks" />
<p class="caption">Treesearch compared with flatten, both with implicit stacks</p></div>
<p>Here is more evidence that consing is very cheap: the difference between treesearch (which does not cons) and flatten (which does) is tiny.</p>
<h3 id="sbcl">SBCL</h3>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/sbcl-treesearch-implicit-vector.svg" alt="Treesearch: implicit compared with vector stack" />
<p class="caption">Treesearch: implicit compared with vector stack</p></div>
<p>So here is SBCL. For SBCL explicitly managing the stack as a vector is significantly faster than the implicit stack. Something that is also apparent here is how variable SBCL’s timings are compared with LW’s: we don’t know why that is, although we suspect it might be because SBCL’s garbage collector is more intrusive than LW’s. We also don’t know whether this variation is repeatable, or whether it’s due to a single very slow run or something like that.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/sbcl-treesearch-vector-consy.svg" alt="Treesearch: vector stack compared with consy stack" />
<p class="caption">Treesearch: vector stack compared with consy stack</p></div>
<p>For SBCL the consy stack is significantly slower than the vector stack, so for SBCL the vector stack is the fastest.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/sbcl-treesearch-flatten.svg" alt="Treesearch compared with flatten, both with implicit stacks" />
<p class="caption">Treesearch compared with flatten, both with implicit stacks</p></div>
<p>SBCL has a slightly larger difference between treesearch and flatten, with flatten being slower. There are also curious ‘waves’ in the plot as depth increases.</p>
<h3 id="lispworks-compared-with-sbcl">LispWorks compared with SBCL</h3>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-sbcl-treesearch-implicit.svg" alt="Treesearch: SBCL compared with Lispworks, implicit stacks" />
<p class="caption">Treesearch: SBCL compared with Lispworks, implicit stacks</p></div>
<p>LW is significantly faster than SBCL for implicit stacks except for very small depths.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-sbcl-treesearch-best.svg" alt="Treesearch: SBCL compared with Lispworks, best stacks" />
<p class="caption">Treesearch: SBCL compared with Lispworks, best stacks</p></div>
<p>This compares LW using an implicit stack with SBCL using an explicit vector stack. The difference is pretty small now.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-sbcl-flatten-consy.svg" alt="Flatten: SBCL compared with Lispworks, consy stacks" />
<p class="caption">Flatten: SBCL compared with Lispworks, consy stacks</p></div>
<p>This was meant to be the worst-case for both: flattening and a consy stack. But it’s not particularly informative, I think.</p>
<h3 id="the-outer-reaches-lispworks-with-a-deep-tree">The outer reaches: LispWorks with a deep tree</h3>
<p>We did one run with the maximum depth set to 10,000 with a step of 500, and maximum breadth set to 1,000 with a step of 100, averaged over 100 iterations instead of 1,000. This is too deep for LW’s stack, but LW allows stack extension, and we wrote what later became <a href="https://github.com/tfeb/tfeb-lisp-implementation-hax/blob/main/lw/modules/allowing-stack-extensions.lisp">this</a> to extend the stack as required. Note that this happens only during the first recursion into the left-hand branch of the tree so has minimal effect on performance. This also used <code>search/explicit-stack/adjb</code> for the vector stack.</p>
<div class="figure"><img src="/fragments/img/2023/zyni-flatten/lw-treesearch-implicit-vector-deep.svg" alt="Treesearch: implicit compared with vector stack, deep tree" />
<p class="caption">Treesearch: implicit compared with vector stack, deep tree</p></div>
<p>As before the implicit stack is much better for LW. This is much more bumpy than LW was for smaller depths: this might have been because the machine did other things while it was running, but we don’t think so.</p>
<h2 id="some-conclusions">Some conclusions</h2>
<p>None of the differences were really large. In particular there’s no enormous advantage from managing the stack yourself.</p>
<p>Consing and the resulting garbage-collection does really seem to be very cheap, especially in LispWorks: the days of long GC pauses are long gone.</p>
<p>We were surprised that LispWorks was fairly reliably faster than SBCL: surprised enough that we ran everything several times to be sure. It’s also interesting how much smoother LW’s performance surface is in most cases.</p>
<p>It is possible that our implementations just suck, of course.</p>
<p>Mostly it’s just some pretty pictures.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2023-03-26-measuring-some-tree-traversing-functions-footnote-1-definition" class="footnote-definition">
<p>All of the functions should be portable CL. Some of the mechanism for expressing dependencies and loading things is not. However it should be easy for anyone to run this if they wish to. <a href="#2023-03-26-measuring-some-tree-traversing-functions-footnote-1-return">↩</a></p></li>
<li id="2023-03-26-measuring-some-tree-traversing-functions-footnote-2-definition" class="footnote-definition">
<p>Getting the bilinear interpolation right took longer than anything else, and perhaps longer than everything else put together. <a href="#2023-03-26-measuring-some-tree-traversing-functions-footnote-2-return">↩</a></p></li></ol></div>
<h1>The absurdity of stacks</h1>
<p><em>Tim Bradshaw, 2023-03-25</em></p>
<p>Very often people regard the stack as a scarce, expensive resource, while the heap is plentiful and very cheap. This is absurd: the stack is memory, the heap is also memory. Deforming programs so they are ‘iterative’ in order that they do not run out of the stack we imagine to be so costly is ridiculous: if you have a program which is inherently recursive, let it be recursive.</p>
<!-- more-->
<p>In a <a href="https://www.tfeb.org/fragments/2023/03/13/variations-on-a-theme/" title="Variations on a theme">previous article</a> my friend Zyni wrote some variations on a list-flattening function<sup><a href="#2023-03-25-the-absurdity-of-stacks-footnote-1-definition" name="2023-03-25-the-absurdity-of-stacks-footnote-1-return">1</a></sup>, some of which were ‘recursive’ and some of which ‘iterative’. Of course, the ones which claim to be iterative are, in fact, recursive: any procedure which traverses a recursively-defined data structure such as a tree of conses is necessarily recursive. The ‘iterative’ versions just use an explicitly-maintained stack rather than the implicit stack provided by the language. That makes sense only if stack space is very small compared to the heap and must therefore be conserved. And, well, for many systems that’s true. But it is small only because we have administratively decided it should be small: the stack is just memory. If there is plenty of memory for the heap, there is plenty for the stack.</p>
<p>There are, or may be, arguments for why stacks needed to be small on ancient machines. The history is fascinating, but it is not relevant to today’s systems, other than tiny embedded ones. The persistent view of modern machines as giant PDP–11s has been a blight for well over two decades now: it needs to stop.</p>
<p>The argument that the stack should be small often seems to be that, if it’s not, people will write programs which run away. That’s spurious: if such a program is, in fact, iterative, then good compilers will eliminate the tail calls and it will not use stack: a small limit on the stack will not help. If it’s really recursive then why should it run out of storage before its conversion to a program which manages the stack explicitly does? Of course <em>that’s exactly what compilers which do <a href="https://en.wikipedia.org/wiki/Continuation-passing_style?wprov=sfti1" title="continuation-passing style">CPS conversion</a> already do</em>: programs written using compilers which do that won’t have these weird stack limits in the first place. But it should not be necessary to rely on a CPS-converting compiler, or to write in continuation-passing style manually to avoid stack usage: it should be used for other reasons, because the stack is not, in fact, expensive.</p>
<p>Still less should people feel the need to write programs which explicitly manage a stack except in extraordinary cases.</p>
<p>There need to be <em>some</em> limits on stack size, just as there need to be <em>some</em> limits on heap size, but making the limit on stack size far smaller than the limit on heap size simply encourages people to believe things which aren’t true, and to live in fear of recursive programs.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2023-03-25-the-absurdity-of-stacks-footnote-1-definition" class="footnote-definition">
<p>I still want to know how often functions like this are used in real life. <a href="#2023-03-25-the-absurdity-of-stacks-footnote-1-return">↩</a></p></li></ol></div>
<h1>Variations on a theme</h1>
<p><em>Tim Bradshaw, 2023-03-13</em></p>
<p>My friend Zyni wrote a comment to a thread on reddit with some variations on a list-flattening function. We’ve since spent some time thinking about things related to this, which is written up in a following article. Here is her comment so the following article can refer to it. Other than notes at the end the following text is Zyni’s, not mine.</p>
<!-- more-->
<h2 id="httpswwwredditcomrcommonlispcomments11o1wvmcommentjbt9n54utmsourceshareutmmediumweb2xcontext3the-reddit-comment-by-zyni"><a href="https://www.reddit.com/r/Common_Lisp/comments/11o1wvm/comment/jbt9n54/?utm_source=share&utm_medium=web2x&context=3">The reddit comment by Zyni</a></h2>
<p>First of all we all know that CL does not promise to optimize tail recursion: means that tail recursive program may generate recursive not iterative process. So recursive program in CL <em>even if tail recursive</em> is not safe on data of unknown size, assuming stack is limited.</p>
<p>But let us assume as good implementations do that tail recursion is optimized in implementation (no need for general tail calls here but is obvious nice thing if implementations do this). Certainly if we are deploying code in space we know what implementation we use and can check this.</p>
<p>So we look at this supposed wonder of code, which I rewrite slightly to use <a href="https://tfeb.github.io/tfeb-lisp-hax/#applicative-iteration-iterate" title="iterate"><code>iterate</code> macro</a> which is simply Scheme’s named-<code>let</code> to be compatible with later examples:</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; original terrible one
  (iterate ftn ((x o) (accumulator '()))
    (typecase x
      (null accumulator)
      (cons (ftn (car x) (ftn (cdr x) accumulator)))
      (t (cons x accumulator)))))</code></pre>
<p>This … is really bad program. It makes an essential mistake that it wishes to build result forwards but lists wish to be built backwards, so it must therefore recurse (not tail) on cdr of structure first. But most list-based structures have little weight in car but much in cdr, so this will fail <em>even on list which is already flat</em>: <code>(flatten (make-list 100000 :initial-element 1))</code> will fail if your example fails.</p>
<p>Any person presenting this code as good example should be ashamed of self.</p>
<p>So first change: we accept that we must build lists backwards but we change program so that tail call is on cdr not car, and reverse result:</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; not TR but better on usual assumptions
  (nreverse
   (iterate ftn ((x o) (accumulator '()))
     (typecase x
       (null accumulator)
       (cons (ftn (cdr x) (ftn (car x) accumulator)))
       (t (cons x accumulator))))))</code></pre>
<p>This function will be fine on assumption of structures which have most weight in their cdrs, which often is true.</p>
<p>Well, you say, ugly <code>reverse</code>. OK this is easy: we simply add in a <a href="https://tfeb.github.io/tfeb-lisp-hax/#collecting-lists-forwards-and-accumulating-collecting" title="collecting"><code>collecting</code> macro</a> which allows construction of list forwards, implementation is obvious (tail pointer). Now we have done this we can also reorder calls to be more obvious (car call, not TR, is now first):</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; not TR, better on usual assumptions, no reverse
  (collecting
    (iterate ftn ((x o))
      (typecase x
        (cons
         (ftn (car x))
         (ftn (cdr x)))
        (null)
        (t (collect x))))))</code></pre>
<p>This is still not fully TR, so will fail on structures which have much weight in car.</p>
<p>Well, of course, we can deal with this as well: we use explicit agenda to move stack onto heap and turn into pure tail recursive version. First one which builds list backwards in obvious way, therefore needs <code>reverse</code> again:</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; pure TR
  (iterate ftn ((agenda (list o))
                (accumulator '()))
    (if (null agenda)
        ;; can write own reverse as tail recursive of course if wish
        ;; to be pure of heart
        (nreverse accumulator)
        (destructuring-bind (this . more) agenda
          (typecase this
            (null
             (ftn more accumulator))
            (cons
             (ftn (list* (car this) (cdr this) more) accumulator))
            (t
             (ftn more (cons this accumulator))))))))</code></pre>
<p>Assuming implementation optimizes tail recursion this will flatten completely arbitrary structure limited only by memory.</p>
<p>We can avoid this reversery of course:</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; pure TR, no reverse
  (collecting
    (iterate ftn ((agenda (list o)))
      (when (not (null agenda))
        (destructuring-bind (this . more) agenda
          (typecase this
            (null
             (ftn more))
            (cons
             (ftn (list* (car this) (cdr this) more)))
            (t
             (collect this)
             (ftn more))))))))</code></pre>
<p>As before this is limited only by memory assuming implementation optimizes tail calls.</p>
<hr />
<p>Well, I have written Lisp for only couple of years really (but have maths background). But even I can see that this idea of having to put scary label on recursive function is very bad. Instead people using such code should perhaps <em>read it and understand it</em> to see what its problems and advantages are. Radical idea, I know.</p>
<p>Finally idea that stack space is scarce may or may not be true. Example, if we rewrite original version in Racket (first Lisp I used before being lured to dark side):</p>
<pre class="brush: racket"><code>(define (flatten o)
  (let ftn ([x o] [accumulator '()])
    (cond
      [(null? x) accumulator]
      [(cons? x) (ftn (car x) (ftn (cdr x) accumulator))]
      [else (cons x accumulator)])))</code></pre>
<p>This will happily ‘flatten’ 100,000 element list and is only limited by memory available because Racket does not treat stack same way.</p>
<hr />
<p>Finally here is variant of final version using <a href="https://tfeb.github.io/tfeb-lisp-hax/#decomposing-iteration-simple-loops" title="simple loops"><code>looping</code> macro</a> which does applicative iteration: this is iterative, on any implementation:</p>
<pre class="brush: lisp"><code>(defun flatten (o)
  ;; Iterative
  (collecting
    (looping ((agenda (list o)))
      (when (null agenda)
        (return))
      (destructuring-bind (this . more) agenda
        (typecase this
          (null more)
          (cons (list* (car this) (cdr this) more))
          (t (collect this) more))))))</code></pre>
<p><code>looping</code> part of this turns into:</p>
<pre class="brush: lisp"><code>(let ((agenda (list o)))
  (block nil
    (tagbody
     #:start (setq agenda
                   (progn
                     (when (null agenda) (return))
                     (destructuring-bind (this . more) agenda
                       (typecase this
                         (null more)
                         (cons (list* (car this) (cdr this) more))
                         (t (collect this) more)))))
     (go #:start))))</code></pre>
<p>which is iterative.</p>
<p>I think <code>iterate</code> one is nicer.</p>
<hr />
<h2 id="notes-from-tim">Notes from Tim</h2>
<p>English is Zyni’s third language: she wanted me to fix up the above but I refused as I find the way she writes so charming.</p>
<p>Both of us would like to know how often <code>flatten</code> is actually used: everyone seems to be very keen on it, but we can’t think of any cases where we’ve ever wanted it or anything very much like it.</p>
<p>All of the macros referenced are ‘mine’ in a somewhat loose sense: They’re all published by me, and some of them are mine, some of them were mine but have been made much better by Zyni, some of them are really hers. There are generally comments in the code. Zyni refuses to have anything but a very minimal internet presence for reasons I used to think were absurd but no longer do: you can’t be too careful when your parents and by extension you might be on the wrong side of Putin.</p>
<p>Zyni is not her real name, obviously.</p>
<h1>Two tiny Lisp evaluators</h1>
<p><em>Tim Bradshaw, 2023-02-27</em></p>
<p>Everyone who has written Lisp has written tiny Lisp evaluators in Lisp: here are two more.</p>
<!-- more-->
<p>Following two <a href="https://tfeb.org/fragments/2023/02/22/how-to-understand-closures-in-common-lisp/">recent</a> <a href="https://tfeb.org/fragments/2023/02/27/dynamic-binding-without-special-in-common-lisp/">articles</a> I wrote on scope and extent in Common Lisp, I thought I would finish with two very tiny evaluators for dynamically and lexically bound variants on a tiny Lisp.</p>
<h2 id="the-language">The language</h2>
<p>The tiny Lisp these evaluators interpret is not minimal: it has constructs other than <code>lambda</code>, and even has assignment. But it is pretty small. Other than the binding rules the languages are identical.</p>
<ul>
<li><strong><code>λ</code></strong> & <strong><code>lambda</code></strong> are synonyms and construct procedures, which can take any number of arguments;</li>
<li><strong><code>quote</code></strong> quotes its argument;</li>
<li><strong><code>if</code></strong> is a conditional expression (the else part is optional);</li>
<li><strong><code>set!</code></strong> is assignment and mutates a binding.</li></ul>
<p>That is all that exists.</p>
<p>Both evaluators understand primitives, which are usually just functions in the underlying Lisp: since the languages are Lisp–1s, you could also expose other sorts of things of course (for instance true and false values). You can provide a list of initial bindings to them to define useful primitives.</p>
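<p>For instance, a plausible set of initial bindings might look like this (this particular alist is illustrative: it is not taken from the source):</p>
<pre class="brush: lisp"><code>(defparameter *initial-bindings*
  ;; an alist of names to primitives: the evaluators treat anything
  ;; satisfying functionp as a primitive
  (list (cons 'null? #'null)
        (cons 'cons #'cons)
        (cons 'car #'car)
        (cons 'cdr #'cdr)
        (cons '+ #'+)))</code></pre>
<p>and then, for either evaluator:</p>
<pre class="brush: lisp"><code>> (evaluate '((λ (x) (+ x 1)) 2) *initial-bindings*)
3</code></pre>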
<h2 id="requirements">Requirements</h2>
<p>Both evaluators rely on my <a href="https://tfeb.github.io/tfeb-lisp-hax/#applicative-iteration-iterate">iterate</a> and <a href="https://tfeb.github.io/tfeb-lisp-hax/#simple-pattern-matching-spam">spam</a> hacks: they could easily be rewritten not to do so.</p>
<h2 id="the-dynamic-evaluator">The dynamic evaluator</h2>
<p>A procedure is represented by a structure which has a list of formals and a body of one or more forms.</p>
<pre class="brush: lisp"><code>(defstruct (procedure
            (:print-function
             (lambda (p s d)
               (declare (ignore d))
               (print-unreadable-object (p s)
                 (format s "λ ~S" (procedure-formals p))))))
  (formals '())
  (body '()))</code></pre>
<p>The evaluator simply dispatches on the type of thing and then on the operator for compound forms.</p>
<pre class="brush: lisp"><code>(defun evaluate (thing bindings)
  (typecase thing
    (symbol
     (let ((found (assoc thing bindings)))
       (unless found
         (error "~S unbound" thing))
       (cdr found)))
    (list
     (destructuring-bind (op . arguments) thing
       (case op
         ((lambda λ)
          (matching arguments
            ((head-matches (list-of #'symbolp))
             (make-procedure :formals (first arguments)
                             :body (rest arguments)))
            (otherwise
             (error "bad lambda form ~S" thing))))
         ((quote)
          (matching arguments
            ((list-matches (any))
             (first arguments))
            (otherwise
             (error "bad quote form ~S" thing))))
         ((if)
          (matching arguments
            ((list-matches (any) (any))
             (if (evaluate (first arguments) bindings)
                 (evaluate (second arguments) bindings)))
            ((list-matches (any) (any) (any))
             (if (evaluate (first arguments) bindings)
                 (evaluate (second arguments) bindings)
                 (evaluate (third arguments) bindings)))
            (otherwise
             (error "bad if form ~S" thing))))
         ((set!)
          (matching arguments
            ((list-matches #'symbolp (any))
             (let ((found (assoc (first arguments) bindings)))
               (unless found
                 (error "~S unbound" (first arguments)))
               (setf (cdr found) (evaluate (second arguments) bindings))))
            (otherwise
             (error "bad set! form ~S" thing))))
         (t
          (applicate (evaluate (first thing) bindings)
                     (mapcar (lambda (form)
                               (evaluate form bindings))
                             (rest thing))
                     bindings)))))
    (t thing)))</code></pre>
<p>The interesting thing here is that <code>applicate</code> needs to know the current set of bindings so it can extend them dynamically.</p>
<p>Here is <code>applicate</code> which has a case for primitives and procedures</p>
<pre class="brush: lisp"><code>(defun applicate (thing arguments bindings)
  (etypecase thing
    (function
     ;; a primitive
     (apply thing arguments))
    (procedure
     (iterate bind ((vtail (procedure-formals thing))
                    (atail arguments)
                    (extended-bindings bindings))
       (cond
         ((and (null vtail) (null atail))
          (iterate eval-body ((btail (procedure-body thing)))
            (if (null (rest btail))
                (evaluate (first btail) extended-bindings)
                (progn
                  (evaluate (first btail) extended-bindings)
                  (eval-body (rest btail))))))
         ((null vtail)
          (error "too many arguments"))
         ((null atail)
          (error "not enough arguments"))
         (t
          (bind (rest vtail)
                (rest atail)
                (acons (first vtail) (first atail)
                       extended-bindings))))))))</code></pre>
<p>The thing that makes this evaluator dynamic is that the bindings that <code>applicate</code> extends are those it was given: procedures do not remember bindings.</p>
<h2 id="the-lexical-evaluator">The lexical evaluator</h2>
<p>A procedure is represented by a structure as before, but this time it has a set of bindings associated with it: the bindings in place when it was created.</p>
<pre class="brush: lisp"><code>(defstruct (procedure
            (:print-function
             (lambda (p s d)
               (declare (ignore d))
               (print-unreadable-object (p s)
                 (format s "λ ~S" (procedure-formals p))))))
  (formals '())
  (body '())
  (bindings '()))</code></pre>
<p>The evaluator is almost identical:</p>
<pre class="brush: lisp"><code>(defun evaluate (thing bindings)
  (typecase thing
    (symbol
     (let ((found (assoc thing bindings)))
       (unless found
         (error "~S unbound" thing))
       (cdr found)))
    (list
     (destructuring-bind (op . arguments) thing
       (case op
         ((lambda λ)
          (matching arguments
            ((head-matches (list-of #'symbolp))
             (make-procedure :formals (first arguments)
                             :body (rest arguments)
                             :bindings bindings))
            (otherwise
             (error "bad lambda form ~S" thing))))
         ((quote)
          (matching arguments
            ((list-matches (any))
             (first arguments))
            (otherwise
             (error "bad quote form ~S" thing))))
         ((if)
          (matching arguments
            ((list-matches (any) (any))
             (if (evaluate (first arguments) bindings)
                 (evaluate (second arguments) bindings)))
            ((list-matches (any) (any) (any))
             (if (evaluate (first arguments) bindings)
                 (evaluate (second arguments) bindings)
                 (evaluate (third arguments) bindings)))
            (otherwise
             (error "bad if form ~S" thing))))
         ((set!)
          (matching arguments
            ((list-matches #'symbolp (any))
             (let ((found (assoc (first arguments) bindings)))
               (unless found
                 (error "~S unbound" (first arguments)))
               (setf (cdr found) (evaluate (second arguments) bindings))))
            (otherwise
             (error "bad set! form ~S" thing))))
         (t
          (applicate (evaluate (first thing) bindings)
                     (mapcar (lambda (form)
                               (evaluate form bindings))
                             (rest thing)))))))
    (t thing)))</code></pre>
<p>The differences are that when constructing a procedure the current bindings are recorded in the procedure, and it is no longer necessary to pass bindings to <code>applicate</code>.</p>
<p><code>applicate</code> is also almost identical:</p>
<pre class="brush: lisp"><code>(defun applicate (thing arguments)
  (etypecase thing
    (function
     ;; a primitive
     (apply thing arguments))
    (procedure
     (iterate bind ((vtail (procedure-formals thing))
                    (atail arguments)
                    (extended-bindings (procedure-bindings thing)))
       (cond
         ((and (null vtail) (null atail))
          (iterate eval-body ((btail (procedure-body thing)))
            (if (null (rest btail))
                (evaluate (first btail) extended-bindings)
                (progn
                  (evaluate (first btail) extended-bindings)
                  (eval-body (rest btail))))))
         ((null vtail)
          (error "too many arguments"))
         ((null atail)
          (error "not enough arguments"))
         (t
          (bind (rest vtail)
                (rest atail)
                (acons (first vtail) (first atail)
                       extended-bindings))))))))</code></pre>
<p>The difference is that the bindings it extends when binding arguments are the bindings which the procedure remembered, not the dynamically-current bindings, which it does not even know.</p>
<h2 id="the-difference-between-them">The difference between them</h2>
<p>Here is the example that shows how these two evaluators differ.</p>
<p>With the dynamic evaluator:</p>
<pre class="brush: lisp"><code>? ((λ (f)
((λ (x)
;; bind x to 1 around the call to f
(f))
1))
((λ (x)
;; bind x to 2 when the function that will be f is created
(λ () x))
2))
1</code></pre>
<p>The binding in effect is the dynamically current one, not the one that was in effect when the procedure was created.</p>
<p>With the lexical evaluator:</p>
<pre class="brush: lisp"><code>? ((λ (f)
((λ (x)
;; bind x to 1 around the call to f
(f))
1))
((λ (x)
;; bind x to 2 when the function that will be f is created
(λ () x))
2))
2</code></pre>
<p>Now the binding in effect is the one that existed when the procedure was created.</p>
<p>Something more interesting is how you create recursive procedures in the lexical evaluator. With suitable bindings for primitives, it’s easy to see that this can’t work:</p>
<pre class="brush: lisp"><code>((λ (length)
(length '(1 2 3)))
(λ (l)
(if (null? l)
0
(+ (length (cdr l)) 1))))</code></pre>
<p>It can’t work because <code>length</code> is not in scope in the body of <code>length</code>. It <em>will</em> work in the dynamic evaluator.</p>
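<p>Assuming <code>null?</code>, <code>cdr</code> and <code>+</code> are among the primitive bindings, the dynamic evaluator’s REPL would say:</p>
<pre class="brush: lisp"><code>? ((λ (length)
     (length '(1 2 3)))
   (λ (l)
     (if (null? l)
         0
         (+ (length (cdr l)) 1))))
3</code></pre>
<p>because at the point of the recursive call <code>length</code> is dynamically bound.</p>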
<p>The first fix, which is similar to what Scheme does with <code>letrec</code>, is to use assignment to mutate the binding so it is correct:</p>
<pre class="brush: lisp"><code>((λ (length)
(set! length (λ (l)
(if (null? l)
0
(+ (length (cdr l)) 1))))
(length '(1 2 3)))
0)</code></pre>
<p>Note the initial value of <code>length</code> is never used.</p>
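<p>So, in the lexical evaluator’s REPL, and with the same assumptions about primitives:</p>
<pre class="brush: lisp"><code>? ((λ (length)
     (set! length (λ (l)
                    (if (null? l)
                        0
                        (+ (length (cdr l)) 1))))
     (length '(1 2 3)))
   0)
3</code></pre>
<p>The inner λ remembers the binding of <code>length</code>, and <code>set!</code> mutates that same binding, so the recursive reference now finds the procedure.</p>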
<p>The second fix is to use something like <a href="https://tfeb.org/fragments/2020/03/09/the-u-combinator/">the U combinator</a> (you could use Y of course: I think U is simpler to understand):</p>
<pre class="brush: lisp"><code>((λ (length)
(length '(1 2 3)))
(λ (l)
((λ (c)
(c c l 0))
(λ (c t s)
(if (null? t)
s
(c c (cdr t) (+ s 1)))))))</code></pre>
<h2 id="source-code">Source code</h2>
<p>These two evaluators, together with a rudimentary REPL which can use either of them, can be found <a href="https://github.com/tfeb/tiny-eval">here</a>.</p>Dynamic binding without special in Common Lispurn:https-www-tfeb-org:-fragments-2023-02-27-dynamic-binding-without-special-in-common-lisp2023-02-27T09:53:27Z2023-02-27T09:53:27ZTim Bradshaw
<p>In Common Lisp, dynamic bindings and lexical bindings live in the same namespace. They don’t have to.</p>
<!-- more-->
<p>Common Lisp has <a href="https://www.tfeb.org/fragments/2023/02/22/how-to-understand-closures-in-common-lisp/" title="How to understand closures in Common Lisp">two sorts of bindings for variables</a>: lexical binding and dynamic binding. Lexical binding has lexical scope — the binding is available where it is visible in source code — and indefinite extent — the binding is available as long as any code might reference it. Dynamic binding has indefinite scope — the binding is available to any code which runs between when the binding is established and when control leaves the form which established it — and dynamic extent — the binding ceases to exist when control leaves the binding form.</p>
<p>These are really two very different things. However CL places both of these kinds of bindings into the same namespace, relying on <code>special</code> declarations and proclamations to tell the system which sort of binding to create and reference for a given name.</p>
<p>That doesn’t have to be the case: it’s possible in CL to completely isolate these two namespaces from each other. This means you could write code where all variable references were to lexical bindings and where dynamic bindings were created and referenced by a completely different set of operators. Here is an example of that. Following practice in some old Lisps I will call this ‘fluid’ binding. I will also use <code>/</code> to delimit the names of fluid variables simply to distinguish them from normal variables.</p>
<pre class="brush: lisp"><code>(defun inner (varname value)
(setf (fluid-value varname) value))
(defun outer (varname value)
(call/fluid-bindings
(lambda ()
(values
(fluid-value varname)
(progn
(inner varname (1+ value))
(fluid-value varname))))
(list varname)
(list value)))</code></pre>
<p>And now</p>
<pre class="brush: lisp"><code>> (outer '/v/ 1)
1
2</code></pre>
<p>Here are a set of operators for dealing with these fluid variables:</p>
<p><strong><code>fluid-value</code></strong> accesses the value of a fluid variable.</p>
<p><strong><code>fluid-boundp</code></strong> tells you if a name is bound as a fluid variable.</p>
<p><strong><code>call/fluid-bindings</code></strong> calls a function with one or more fluid variables bound.</p>
<p><strong><code>define-fluid</code></strong> (not used above) defines a global value for a fluid variable.</p>
<p>Well, of course you can do something like this using an explicit binding stack and a single special variable to hang it from. But that’s not how this works: these ‘fluid variables’ are just CL’s dynamic variables:</p>
<pre class="brush: lisp"><code>(defun call/print-base (f base)
(call/fluid-bindings f '(*print-base*) (list base)))</code></pre>
<pre class="brush: lisp"><code>> (call/print-base
(lambda ()
*print-base*)
2)
2</code></pre>
<p>So how does this work? Well <code>fluid-value</code> and <code>fluid-boundp</code> are obvious:</p>
<pre class="brush: lisp"><code>(defun fluid-value (s)
(symbol-value s))
(defun (setf fluid-value) (n s)
(setf (symbol-value s) n))
(defun fluid-boundp (s)
(boundp s))</code></pre>
<p>And the trick now is that <em>CL gives you enough mechanism to bind named dynamic variables yourself</em>, that mechanism being <a href="http://www.lispworks.com/documentation/HyperSpec/Body/s_progv.htm" title="progv">progv</a>, which</p>
<blockquote>
<p>[…] allows binding one or more dynamic variables whose names may be determined at run time […]</p></blockquote>
<p>So now <code>call/fluid-bindings</code> just uses <code>progv</code>:</p>
<pre class="brush: lisp"><code>(defun call/fluid-bindings (f fluids values)
(progv fluids values (funcall f)))</code></pre>
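<p>If calling <code>call/fluid-bindings</code> with explicit lists feels clumsy, a macro is easy to write. Here is a sketch (the name <code>with-fluids</code> and its syntax are my invention, not part of the operator set above):</p>
<pre class="brush: lisp"><code>(defmacro with-fluids ((&rest bindings) &body body)
  ;; Each binding is (name value): names are quoted at expansion
  ;; time, values are evaluated at run time.
  `(call/fluid-bindings
    (lambda () ,@body)
    (list ,@(mapcar (lambda (binding) `',(first binding)) bindings))
    (list ,@(mapcar #'second bindings))))</code></pre>
<p>With this, <code>(with-fluids ((/v/ 1)) (fluid-value '/v/))</code> should evaluate to <code>1</code>.</p>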
<p>And finally <code>define-fluid</code> looks like this:</p>
<pre class="brush: lisp"><code>(defmacro define-fluid (var &optional (value nil)
(doc nil docp))
`(progn
(setf (fluid-value ',var) ,value)
,@(if docp
`((setf (documentation ',var 'variable) ',doc))
'())
',var))</code></pre>
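<p>For example, one might say (the variable name here is invented):</p>
<pre class="brush: lisp"><code>> (define-fluid /indent/ 0 "current indentation")
/indent/
> (fluid-value '/indent/)
0</code></pre>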
<p>The interesting thing here is that there are no <code>special</code> declarations or proclamations: you can create and bind new fluid variables without any recourse to <code>special</code> at all, in a way which is completely compatible with the existing dynamic variables, because fluid variables <em>are</em> dynamic variables.</p>
<p>So one way of thinking about <code>special</code> is that it is a declaration that says ‘for this variable name, access the namespace of dynamic bindings rather than lexical bindings’. This is not really what <code>special</code> was of course in Lisps before CL — it was historically closer to an instruction to use the interpreter’s variable binding mechanism in compiled code — but you can think of it this way in CL, where the interpreter and compiler do not have separate binding rules.</p>
<p>And, of course, using something like the above, you could write code in CL where all variable bindings were lexical and dynamic variables lived entirely in their own namespace. For instance this works fine:</p>
<pre class="brush: lisp"><code>(defun f ()
(let ((x 2))
(call/fluid-bindings
(lambda ()
(values x (fluid-value 'x)))
'(x) '(3))))</code></pre>
<pre class="brush: lisp"><code>> (f)
2
3</code></pre>
<p>The reference to <code>x</code> as a variable refers to its lexical binding, while <code>(fluid-value 'x)</code> refers to its dynamic binding.</p>
<p>Whether writing code like that would be useful I am not sure: I think that the <code>*</code>-convention for dynamic variables is perfectly fine in fact. But it is perhaps interesting to see that you can think of dynamic bindings in CL this way.</p>How to understand closures in Common Lispurn:https-www-tfeb-org:-fragments-2023-02-22-how-to-understand-closures-in-common-lisp2023-02-22T13:51:07Z2023-02-22T13:51:07ZTim Bradshaw
<p>The first rule of understanding closures is that you do not talk about closures. The second rule of understanding closures in Common Lisp is that <em>you do not talk about closures</em>. These are all the rules.</p>
<!-- more-->
<p>There is a lot of elaborate bowing and scraping about closures in the Lisp community. But despite that <em>a closure isn’t actually a thing</em>: the thing people call a closure is just a function which obeys the language’s rules about the scope and extent of bindings. <em>Implementors</em> need to care about closures: users just need to understand the rules for bindings. So rather than obsessing about this magic invisible thing which doesn’t actually exist in the language, I suggest that it is far better simply to think about the rules which cover <em>bindings</em>.</p>
<h2 id="angels-and-pinheads">Angels and pinheads</h2>
<p>It’s easy to see why this has happened: <a href="http://www.lispworks.com/documentation/HyperSpec/Front/index.htm" title="HyperSpec">the CL standard</a> has a lot of discussion of <a href="http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_l.htm#lexical_closure" title="lexical closure">lexical closures</a>, <a href="http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_l.htm#lexical_environment" title="lexical environment">lexical</a> and <a href="http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_d.htm#dynamic_environment" title="dynamic environment">dynamic</a> environments and so on. So it’s tempting to think that this way of thinking about things is ‘the one true way’ because it has been blessed by those who went before us. And indeed CL does have <a href="http://www.lispworks.com/documentation/HyperSpec/Body/03_aad.htm" title="environment objects">objects representing part of the lexical environment</a> which are given to macro functions. Occasionally these are even useful. But there are <em>no</em> objects which represent closures as distinct from functions, and <em>no</em> predicates which tell you if a function is a closure or not in the standard language: closures simply do not exist as objects distinct from functions at all. They were useful, perhaps, as part of the text which <em>defined</em> the language, but they are nowhere to be found in the language itself.</p>
<p>So, with the exception of the environment objects passed to macros, <em>none</em> of these objects exist in the language. They may exist in implementations, and might even be exposed by some implementations, but from the point of view of the language they simply do not exist: if I give you a function object you cannot know if it is a closure or not.</p>
<p>So it is strange that people spend so much time worrying about these objects which, if they even exist in the implementation, can’t be detected by anyone using the standard language. This is worrying about angels and pinheads: wouldn’t it be simpler just to understand what the rules of the language actually say should observably happen? I think it would.</p>
<p>I am not arguing that the terminology used by the standard is wrong! All I am arguing is that, if you think you want to understand closures, you might instead be better off understanding the rules that give rise to them. And when you have done that you may suddenly find that closures have simply vanished into the mist: all you need is the rules.</p>
<h2 id="history">History</h2>
<p>Common Lisp is steeped in history: it is full of traces of the Lisps which went before it. This is intentional: one goal of CL was to enable programs written in those earlier Lisps — which were <em>all</em> Lisps at that time of course — to run without extensive modification.</p>
<p>But one place where CL <em>didn’t</em> steep itself in history is in exactly the areas that you need to understand to understand closures. Before Common Lisp (really, before Scheme), people spent a lot of time writing papers about <a href="https://en.wikipedia.org/wiki/Funarg_problem" title="the funarg problem">the funarg problem</a> and describing and implementing more-or-less complicated ways of resolving it. Then Scheme came along and decided that this was all nonsense and that it could just be made to go away by implementing the language properly. And the Common Lisp designers, who knew about Scheme, said that, well, if Scheme can do this, then we can do this as well, and so they also made the problem vanish, although not in quite such an extreme way as Scheme did.</p>
<p>And this is now ancient history: these predecessor Lisps to CL are all at least 40 years old now. I am, just, old enough to have used some of them when they were current, but for most CL programmers these questions were resolved before they were born. The history is very interesting, but you do not need to steep yourself in it to understand closures.</p>
<h2 id="bindings">Bindings</h2>
<p>So the notion of a closure is part of the history behind CL: a hangover from the time when people worried about the funarg problem; a time before they understood that the whole problem could simply be made to go away. So, again, if you think you want to understand closures, the best approach is to understand something else: to understand <em>bindings</em>. Just as with closures, bindings do not exist as objects in the language, although you <em>can</em> make some enquiries about some kinds of bindings in CL. They are also a concept which exists in many programming languages, not just CL.</p>
<p>A <strong>binding</strong> is an association between a name — a symbol — and something. The most common binding is a variable binding, which is an association between a name and a value. There are other kinds of bindings however: the most obvious kind in CL is a function binding: an association between a name and a function object. And for example within a (possibly implicit) <code>block</code> there is a binding between the name of the block and a point to which you can jump. And there are other kinds of bindings in CL as well, and the set is extensible. <a href="http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_b.htm#binding" title="binding">The CL standard</a> only calls variable bindings ‘bindings’, but I am going to use the term more generally.</p>
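<p>For instance, <code>block</code> establishes such a binding, which <code>return-from</code> then refers to (a trivial example of my own):</p>
<pre class="brush: lisp"><code>(block found
  (dolist (e '(1 3 4 5))
    (when (evenp e)
      ;; jump to the exit point bound to the name found
      (return-from found e))))</code></pre>
<p>Here the whole form evaluates to <code>4</code>.</p>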
<p>Bindings are established by some binding construct and are usually not first-class objects in CL: they are just as vaporous as closures and environments. Nevertheless they are a powerful and useful idea.</p>
<h2 id="what-can-be-bound">What can be bound?</h2>
<p>By far the most common kind of binding is a <strong>variable binding</strong>: an association between a name and a value. However there are other kinds of bindings: associations between names and other things. I’ll mention those briefly at the end, but in everything else that follows it’s safe to assume that ‘binding’ means ‘variable binding’ unless I say otherwise.</p>
<h2 id="scope-and-extent">Scope and extent</h2>
<p>For both variable bindings and other kinds of bindings there are two interesting questions you can ask:</p>
<ul>
<li><em>where</em> is the binding available?</li>
<li><em>when</em> does the binding exist?</li></ul>
<p>The first question is about the <a href="http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_s.htm#scope" title="scope"><strong>scope</strong></a> of the binding. The second is about the <a href="http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_e.htm#extent" title="extent"><strong>extent</strong></a> of the binding.</p>
<p>Each of these questions has (at least) two possible answers, giving (at least) four possibilities. CL uses three of these possibilities, and the fourth in a restricted case: two of them, and a restricted version of a third, for variable bindings, and the remaining one for some other kinds of bindings.</p>
<p><strong>Scope.</strong> The two options are:</p>
<ul>
<li>the binding may be available only in code where the binding construct is visible;</li>
<li>or the binding may be available during all code which runs between where the binding is established and where it ends, regardless of whether the binding construct is visible.</li></ul>
<p>What does ‘visible’ mean? Well, given some binding form, it means that the bindings it establishes are visible to all the code that is inside that form in the source. So, in a form like <code>(let ((x 1)) ...)</code> the binding of <code>x</code> is visible to the code that replaces the ellipsis, including any code introduced by macroexpansion, and only to that code.</p>
<p><strong>Extent.</strong> The two options are:</p>
<ul>
<li>the binding may exist only during the time that the binding construct is active, and goes away when control leaves it;</li>
<li>or the binding may exist as long as there is any possibility of reference.</li></ul>
<p>Unfortunately the CL standard is, I think, slightly inconsistent in its naming for these options. However I’m going to use the standard’s terms with one exception. Here they are.</p>
<p><strong>Scope</strong>:</p>
<ul>
<li>when a binding is available only when visible this is called <a href="http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_l.htm#lexical_scope" title="lexical scope"><strong>lexical scope</strong></a>;</li>
<li>when a binding is available to all code within the binding construct this is called <a href="http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_i.htm#indefinite_scope" title="indefinite scope"><strong>indefinite scope</strong></a><sup><a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-1-definition" name="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-1-return">1</a></sup>;</li></ul>
<p><strong>Extent</strong>:</p>
<ul>
<li>when a binding ends at the end of the binding form this is called <a href="http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_d.htm#dynamic_extent" title="dynamic extent"><strong>dynamic extent</strong></a><sup><a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-2-definition" name="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-2-return">2</a></sup>;</li>
<li>when a binding is available indefinitely this is called <a href="http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_i.htm#indefinite_extent" title="indefinite extent"><strong>indefinite extent</strong></a>.</li></ul>
<p>The term from the standard I am <em>not</em> going to use is <a href="http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_d.htm#dynamic_scope" title="dynamic scope"><strong>dynamic scope</strong></a>, which it defines to mean the combination of indefinite scope and dynamic extent. I am not going to use this term because I think it is confusing: although it has ‘scope’ in its name it concerns both scope and extent. Instead I will introduce better, commonly used, terms below for the interesting combinations of scope and extent.</p>
<p>The four possibilities for bindings are then:</p>
<ul>
<li>lexical scope and dynamic extent;</li>
<li>lexical scope and indefinite extent;</li>
<li>indefinite scope and dynamic extent;</li>
<li>indefinite scope and indefinite extent.</li></ul>
<h2 id="the-simplest-kind-of-binding">The simplest kind of binding</h2>
<p>So then let’s ask: what is the simplest kind of binding to understand? If you are reading some code and you see a reference to a binding then what choice from the above options will make it easiest for you to understand whether that reference is valid or not?</p>
<p>Well, the first thing is that you’d like to be able to know <em>by looking at the code</em> whether a reference is valid or not. That means that the binding construct should be <em>visible</em> to you, or that the binding should have lexical scope. Compare the following two fragments of code:</p>
<pre class="brush: lisp"><code>(defun simple (x)
...
(+ x 1)
...)</code></pre>
<p>and</p>
<pre class="brush: lisp"><code>(defun confusing ()
...
(+ *x* 1)
...)</code></pre>
<p>Well, in the first one you can tell, just by looking at the code, that the reference to <code>x</code> is valid: the function, when called, establishes a binding of <code>x</code> and you can see that when reading the code. In the second one you just have to assume that the reference to <code>*x*</code> is valid: you can’t tell by reading the code whether it is or not.</p>
<p><strong>Lexical scope</strong> makes it easiest for people reading the code to understand it, and in particular it is easier to understand than indefinite scope. It is the simplest kind of scoping to understand for people reading the code.</p>
<p>So that leaves extent. Well, in the two examples above dynamic or indefinite extent makes no difference to how simple the code is to understand: once the functions return there’s no possibility of reference to the bindings anyway. To expose the difference we need somehow to construct some object which can refer to a binding <em>after the function has returned</em>. We need something like this:</p>
<pre class="brush: lisp"><code>(defun maker (x)
...
<construct object which refers to binding of x>)
(let ((o (maker 1)))
<use o somehow to cause it to reference the binding of x>)</code></pre>
<p>Well, what is this object going to be? What sort of things reference bindings? <em>Code</em> references bindings, and the objects which contain code are <em>functions</em><sup><a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-3-definition" name="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-3-return">3</a></sup>. What we need to do is construct and return a function:</p>
<pre class="brush: lisp"><code>(defun maker (x)
(lambda (y)
(+ x y)))</code></pre>
<p>and then cause this function to reference the binding by calling it:</p>
<pre class="brush: lisp"><code>(let ((f (maker 1)))
(funcall f 2))</code></pre>
<p>So now we can, finally, ask: what is the choice for the <em>extent</em> of the binding of <code>x</code> which makes this code simplest to understand? Well, the answer is that unless the binding of <code>x</code> remains visible to the function that is created in <code>maker</code>, this code <em>can’t work at all</em>. It would have to be the case that it was simply not legal to return functions like this from other functions. Functions, in other words, would not be first-class objects.</p>
<p>Well, OK, that’s a possibility, and it makes the above code simple to understand: it’s not legal and it’s easy to see that it is not. Except consider this small variant on the above:</p>
<pre class="brush: lisp"><code>(defun maybe-maker (x return-identity-p)
(if return-identity-p
#'identity
(lambda (y)
(+ x y))))</code></pre>
<p>There is <em>no way to know</em> from reading this code whether <code>maybe-maker</code> will return the nasty anonymous function or the innocuous <code>identity</code> function. If it is not allowed to return anonymous functions in this way then there is <em>no way of knowing</em> whether</p>
<pre class="brush: lisp"><code>(funcall (maybe-maker 1 (zerop (random 2)))
2)</code></pre>
<p>is correct or not. This is certainly not simple: in fact it is a horrible nightmare. Another way of saying this is that you’d be in a situation where</p>
<pre class="brush: lisp"><code>(let ((a 1))
(funcall (lambda ()
a)))</code></pre>
<p>would work, but</p>
<pre class="brush: lisp"><code>(funcall (let ((a 1))
(lambda ()
a)))</code></pre>
<p>would not. There are languages which work that way: those languages suck.</p>
<p>So what <em>would</em> be simple? What would be simple is to say that if a binding is visible, it is visible, and that’s the end of the story. In a function like <code>maker</code> above the binding of <code>x</code> established by <code>maker</code> is visible to the function that it returns. Therefore <em>it’s visible to the function that <code>maker</code> returns</em>: without any complicated rules or weird special cases. That means the binding must have indefinite extent.</p>
<p><strong>Indefinite extent</strong> makes it easiest for people reading the code to understand it when that code may construct and return functions, and in particular it is easier to understand than dynamic extent, which makes it essentially impossible to tell in many cases whether such code is correct or not.</p>
<p>And that’s it: lexical scope and indefinite extent, which I will call <strong>lexical binding</strong>, is the simplest binding scheme to understand for a language which has first-class functions<sup><a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-4-definition" name="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-4-return">4</a></sup>.</p>
<p>And really <em>that’s it</em>: that’s all you need to understand. Lexical scope and indefinite extent make reading code simple, and entirely explain the things people call ‘closures’ which are, in fact, simply functions which obey these simple rules.</p>
<h2 id="examples-of-the-simple-binding-rules">Examples of the simple binding rules</h2>
<p>One thing I have not mentioned before is that, in CL, bindings are <strong>mutable</strong>, which is another way of saying that CL supports assignment: assignment to variables is mutation of variable bindings. So, as a trivial example:</p>
<pre class="brush: lisp"><code>(defun maximum (list)
(let ((max (first list)))
(dolist (e (rest list) max)
(when (> e max)
(setf max e)))))</code></pre>
<p>This is very easy to understand and does not depend on the binding rules in detail.</p>
<p>But, well, bindings are mutable, so the rules which say they exist as long as they can be referred to also imply they can be mutated as long as they can be referred to: anything else would certainly not be simple. So here’s a classic example of this:</p>
<pre class="brush: lisp"><code>(defun make-incrementor (&optional (value 0))
(lambda (&optional (increment 1))
(prog1 value
(incf value increment))))</code></pre>
<p>And now:</p>
<pre class="brush: lisp"><code>> (let ((i (make-incrementor)))
(print (funcall i))
(print (funcall i))
(print (funcall i -2))
(print (funcall i))
(print (funcall i))
(values))
0
1
2
0
1</code></pre>
<p>As you can see, the function returned by <code>make-incrementor</code> is mutating the binding that it can still see.</p>
<p>What happens when two functions can see the same binding?</p>
<pre class="brush: lisp"><code>(defun make-inc-dec (&optional (value 0))
(values
(lambda ()
(prog1 value
(incf value)))
(lambda ()
(prog1 value
(decf value)))))</code></pre>
<p>And now</p>
<pre class="brush: lisp"><code>> (multiple-value-bind (inc dec) (make-inc-dec)
(print (funcall inc))
(print (funcall inc))
(print (funcall dec))
(print (funcall dec))
(print (funcall inc))
(values))
0
1
2
1
0</code></pre>
<p>Again, what happens is the simplest thing: you can see simply from reading the code that both functions can see the <em>same</em> binding of <code>value</code> and they are therefore both mutating this common binding.</p>
<p>Here is an example which demonstrates all these features: an implementation of a simple queue as a pair of functions which can see two shared bindings:</p>
<pre class="brush: lisp"><code>(defun make-queue ()
(let ((head '())
(tail nil))
(values
(lambda (thing)
;; Push thing onto the queue
(if (null head)
;; It's empty currently so set it up
(setf head (list thing)
tail head)
;; not empty: just adjust the tail
(setf (cdr tail) (list thing)
tail (cdr tail)))
thing)
(lambda ()
(cond
((null head)
;; empty
(values nil nil))
((null (cdr head))
;; will be empty: don't actually need this case but it is
;; cleaner
(values (prog1 (car head)
(setf head '()
tail nil))
t))
(t
;; will still have content
(values (pop head) t)))))))</code></pre>
<p><code>make-queue</code> will return two functions:</p>
<ul>
<li>the first takes one argument which it appends to the queue;</li>
<li>the second takes no argument and returns either the next element of the queue and <code>t</code>, or <code>nil</code> and <code>nil</code> if the queue is empty.</li></ul>
<p>So, with this little function to drain the queue</p>
<pre class="brush: lisp"><code>(defun drain-and-print (popper)
(multiple-value-bind (value fullp) (funcall popper)
(when fullp
(print value)
(drain-and-print popper))
(values)))</code></pre>
<p>we can see this in action</p>
<pre class="brush: lisp"><code>> (multiple-value-bind (pusher popper) (make-queue)
(funcall pusher 1)
(funcall pusher 2)
(funcall pusher 3)
(drain-and-print popper))
1
2
3</code></pre>
<h2 id="a-less-simple-kind-of-binding-which-is-sometimes-very-useful">A less-simple kind of binding which is sometimes very useful</h2>
<p>Requiring bindings to be simple usually makes programs easy to read and understand. But it also makes it hard to do some things. One of those things is to control the ‘ambient state’ of a program. A simple example would be the base for printing numbers. It’s quite natural to say that ‘in this region of the program I want numbers printed in hex’.</p>
<p>If all we had was lexical binding then this becomes a nightmare: every function you call in the region you want to cause printing to happen in hex needs to take some extra argument which says ‘print in hex’. And if you then decide that, well, you’d also like some other ambient parameter, you need to provide more arguments to every function<sup><a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-5-definition" name="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-5-return">5</a></sup>. This is just horrible.</p>
<p>You might think you can do this with global variables which you temporarily set: that is both fiddly (better make sure you set it back) and problematic in the presence of multiple threads<sup><a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-6-definition" name="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-6-return">6</a></sup>.</p>
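<p>The fiddly version looks something like this (a sketch of the pattern, not code from anywhere in particular: even with <code>unwind-protect</code> to put the old value back, other threads will see the mutation):</p>
<pre class="brush: lisp"><code>(defvar *output-base* 10)

(defun call-with-output-base (f base)
  ;; Temporarily mutate the global value, remembering to restore it
  ;; even on a non-local exit.
  (let ((old *output-base*))
    (setf *output-base* base)
    (unwind-protect
        (funcall f)
      (setf *output-base* old))))</code></pre>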
<p>A better approach is to allow <strong>dynamic bindings</strong>: bindings with indefinite scope and dynamic extent. CL has these, and at this point history becomes unavoidable: rather than have some separate construct for dynamic bindings, CL simply says that some variable bindings, and some references to variable bindings, are to be treated as having indefinite scope and dynamic extent, and you tell the system which bindings this applies to with <code>special</code> declarations / proclamations. CL does this because that’s very close to how various predecessor Lisps worked, and so makes porting programs from them to CL much easier. To make this less painful there is a convention that dynamically-bound variable names have <code>*</code>stars<code>*</code> around them, of course.</p>
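<p>With dynamic bindings the whole dance collapses into a binding form. For instance (a tiny example of my own: <code>*print-base*</code> is a standard special variable):</p>
<pre class="brush: lisp"><code>(defun print-in-hex (things)
  ;; Rebind *print-base* for the dynamic extent of the body: all
  ;; printing done below this point, including in functions called
  ;; from here, sees base 16.
  (let ((*print-base* 16))
    (dolist (thing things)
      (print thing))))</code></pre>
<p>The binding evaporates when control leaves the <code>let</code>, however that happens.</p>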
<p>Dynamic bindings are so useful that if you don’t have them you really need to invent them: I have on at least two occasions implemented a dynamic binding system in Python, for instance.</p>
<p>However this is not an article on dynamic bindings so I will not write more about them here: perhaps I will write another article later.</p>
<h2 id="what-else-can-be-bound">What else can be bound?</h2>
<p>Variable bindings are by far the most common kind. But not the only kind. Other things can be bound. Here is a partial list<sup><a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-7-definition" name="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-7-return">7</a></sup>:</p>
<ul>
<li><strong>local functions</strong> have lexical scope and indefinite extent;</li>
<li><strong>block names</strong> have lexical scope and dynamic extent (see below);</li>
<li><strong>tag names</strong> have lexical scope and dynamic extent (see below);</li>
<li><strong>catch tags</strong> have indefinite scope and dynamic extent;</li>
<li><strong>condition handlers</strong> have indefinite scope and dynamic extent;</li>
<li><strong>restarts</strong> have indefinite scope and dynamic extent.</li></ul>
<p>The two interesting cases here are block names and tag names. Both of these have lexical scope but only dynamic extent. As I argued above this makes it hard to know whether references to them are valid or not. Look at this, for example:</p>
<pre class="brush: lisp"><code>(defun outer (x)
(inner (lambda (r)
(return-from outer r))
x))
(defun inner (r rp)
(if rp
r
(funcall r #'identity)))</code></pre>
<p>So then <code>(funcall (outer nil) 1)</code> will: call <code>inner</code> with a function which wants to return from <code>outer</code> and <code>nil</code>, which will cause <code>inner</code> to call that function, returning the <code>identity</code> function, which is then called by <code>funcall</code> with argument <code>1</code>: the result is 1.</p>
<p>But <code>(funcall (outer t) 1)</code> will instead return the function which wants to return from <code>outer</code>, which is then called by <code>funcall</code> which is an error since it is outside the dynamic extent of the call to <code>outer</code>.</p>
<p>And there is no way that either a human reading the code <em>or the compiler</em> can detect that this is going to happen: a very smart compiler might perhaps be able to deduce that the internal function <em>might</em> be returned from <code>outer</code>, but probably only because this is a rather simple case: for instance in</p>
<pre class="brush: lisp"><code>(defun nasty (f)
(funcall f (lambda ()
(return-from nasty t))))</code></pre>
<p>the situation is just hopeless. So this is a case where the binding rules are not as simple as you might like.</p>
<h2 id="what-is-simple">What is simple?</h2>
<p>For variable bindings I think it’s easy to see that the simplest rule for a person reading the code is lexical binding. The other question is whether that is simpler <em>for the implementation</em>. And the answer is that probably it is not: probably lexical scope and dynamic extent is the simplest implementationally. That certainly approximates what many old Lisps did<sup><a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-8-definition" name="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-8-return">8</a></sup>. It’s fairly easy to write a <em>bad</em> implementation of lexical binding, simply by having all functions retain all the bindings, regardless of whether they might refer to them. A <em>good</em> implementation requires more work. But CL’s approach here is that doing the right thing <em>for people</em> is more important than making the implementor’s job easier. And I think that approach has worked well.</p>
<p>On the other hand CL hasn’t done the right thing for blocks and tags: there are at least three reasons for this.</p>
<p><strong>Implementational complexity.</strong> If the bindings had lexical scope and <em>indefinite</em> extent then you would need to be able to return from a block which had already been returned from, and go to a tag from outside the extent of the form that established it. That opens an enormous can of worms both in making such an implementation work at all but also handling things like dynamic bindings, open files and so on. That’s not something the CL designers were willing to impose on implementors.</p>
<p><strong>Complexity in the specification.</strong> If CL had lexical bindings for blocks and tags then the specification of the language would need to describe what happens in all the many edge cases that arise, including cases where it is genuinely unclear what the correct thing to do is at all such as dealing with open files and so on. Nobody wanted to deal with that, I’m sure: the language specification was already seen as far too big and the effort involved would have made it bigger, later and more expensive.</p>
<p><strong>Conceptual difficulty.</strong> It might seem that making block bindings work like lexical variable bindings would make things simpler to understand. Well, that’s exactly what Scheme did with <code>call/cc</code>, and <code>call/cc</code> can give rise to some of the most opaque code I have ever seen. It is often very <em>pretty</em> code, but it’s not easy to understand.</p>
<p>I think the bargain that CL has struck here is at least reasonable: to make the common case of variable bindings simple for people, and to avoid the cases where doing the right thing results in a language which is harder to understand in many cases and far harder to implement and specify.</p>
<p>Finally, once again I think that the best way to understand closures in CL is not to understand them: instead understand the binding rules for variables, why they are simple and what they imply.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-1-definition" class="footnote-definition">
<p>indefinite scope is often called ‘dynamic scope’ although I will avoid this term as it is used by the standard to mean the combination of indefinite scope and dynamic extent. <a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-1-return">↩</a></p></li>
<li id="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-2-definition" class="footnote-definition">
<p>Dynamic extent could perhaps be called ‘definite extent’, but this is not the term that the standard uses so I will avoid it. <a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-2-return">↩</a></p></li>
<li id="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-3-definition" class="footnote-definition">
<p>Here and below I am using the term ‘function’ in the very loose sense that CL usually uses it: almost none of the ‘functions’ I will talk about are actually mathematical functions: they’re what Scheme would call ‘procedures’. <a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-3-return">↩</a></p></li>
<li id="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-4-definition" class="footnote-definition">
<p>For languages which <em>don’t</em> have first-class functions or equivalent constructs, lexical scope and dynamic extent is the same as lexical scope and indefinite extent, because it is not possible to return objects which can refer to bindings from the place those bindings were created. <a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-4-return">↩</a></p></li>
<li id="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-5-definition" class="footnote-definition">
<p>More likely, you would end up making every function have, for instance, an <code>ambient</code> keyword argument whose value would be an alist or plist which mapped between properties of the ambient environment and values for them. All functions which might call other functions would need this extra argument, and would need to be sure to pass it down suitably. <a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-5-return">↩</a></p></li>
<li id="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-6-definition" class="footnote-definition">
<p>This can be worked around, but it’s not simple to do so. <a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-6-return">↩</a></p></li>
<li id="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-7-definition" class="footnote-definition">
<p>In other words ‘this is all I can think of right now, but there are probably others’. <a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-7-return">↩</a></p></li>
<li id="2023-02-22-how-to-understand-closures-in-common-lisp-footnote-8-definition" class="footnote-definition">
<p>Very often old Lisps had indefinite scope and dynamic extent in interpreted code but lexical scope and dynamic extent in compiled code: yes, compiled code behaved differently to interpreted code, and yes, that sucked. <a href="#2023-02-22-how-to-understand-closures-in-common-lisp-footnote-8-return">↩</a></p></li></ol></div>Another letter to Mel Stride that I will not sendurn:https-www-tfeb-org:-fragments-2023-02-21-another-letter-i-will-not-send2023-02-21T11:03:05Z2023-02-21T11:03:05ZTim Bradshaw
<p>I’d like to believe there was some purpose in writing to my MP, but I no longer do. He probably means well, but his soul has been sold, if he ever had one.</p>
<!-- more-->
<h2 id="dear-mr-stride">Dear Mr Stride,</h2>
<p>So it seems we’re going to have yet another prolonged episode where we all watch the group of rather stupid spoiled children who now make up almost all of the parliamentary tory party quarrelling with each other over a problem which is <em>entirely of their own making</em>. And, while your friends are involved in fighting whatever stupid battle it is this time, the country will be falling apart around them. But they don’t care about that, do they? Their own idiot squabbles are so much more important to them, because, after all, they are not the people using food banks, or the people not getting health care, or the people dying, and nor will they ever be. Because while they’re very stupid they are also very rich.</p>
<p>Nobody wants this: nobody wants to hear the same collection of entitled halfwits emit yet another batch of transparent lies about why the problem they created is somebody else’s fault. Who will they find to blame this time? Those nasty Europeans? Some party which hasn’t been in power for over a decade? The imaginary deep state? I don’t think they can blame the Jews out loud just yet, so I expect we’ll hear people talking about ‘citizens of the world’ again, because everyone knows who that means. I am sure the gypsies will be mentioned because apparently you can blame them for almost anything.</p>
<p>Why not do something radical: ask the people of the country who they would like to govern them the way democracies do? Please, do the right thing for once in your life and resign: if enough of your colleagues do so it will, finally, bring about the election the people of the UK so desperately need.</p>A case-like macro for regular expressionsurn:https-www-tfeb-org:-fragments-2023-01-11-a-case-like-macro-for-regular-expressions2023-01-11T18:17:29Z2023-01-11T18:17:29ZTim Bradshaw
<p>I often find myself wanting a simple <code>case</code>-like macro where the keys are regular expressions. <code>regex-case</code> is an attempt at this.</p>
<!-- more-->
<p>I use <a href="https://edicl.github.io/cl-ppcre/">CL-PPCRE</a> for the usual things regular expressions are useful for, and probably for some of the things they should not really be used for as well. I often find myself wanting a <code>case</code> like macro, where the keys are regular expressions. There is a contributed package for <a href="https://github.com/guicho271828/trivia">Trivia</a> which will do this, but Trivia is pretty overwhelming. So I gave in and wrote <code>regex-case</code> which does what I want.</p>
<p><code>regex-case</code> is a <code>case</code>-like macro. It looks like</p>
<pre class="brush: lisp"><code>(regex-case <thing>
(<pattern> (...)
<form> ...)
...
(otherwise ()
<form> ...))</code></pre>
<p>Here <code><pattern></code> is a literal regular expression, either a string or in CL-PPCRE’s s-expression parse-tree syntax for them. Unlike <code>case</code> there can only be a single pattern per clause: allowing the parse-tree syntax makes it hard to do anything else. <code>otherwise</code> (which can also be <code>t</code>) is optional but must be last.</p>
<p>The second form in a clause specifies what, if any, variables to bind on a match. As an example</p>
<pre class="brush: lisp"><code>(regex-case line
("fog\\s+(.*)\\s$" (:match m :registers (v))
...)
...)</code></pre>
<p>will bind <code>m</code> to the whole match and <code>v</code> to the substring corresponding to the first register. You can also bind match and register positions. A nice (perhaps) thing is that you can <em>not</em> bind some register variables:</p>
<pre class="brush: lisp"><code>(regex-case line
(... (:registers (_ _ v))
...)
...)</code></pre>
<p>will bind <code>v</code> to the substring corresponding to the third register. You can use <code>nil</code> instead of <code>_</code>.</p>
<p>The current state of <code>regex-case</code> is a bit preliminary: in particular I don’t like the syntax for binding things very much, although I can’t think of a better one. Currently therefore it’s in my collection of toys: it will probably migrate from there at some point.</p>
<p>Currently documentation is <a href="https://tfeb.github.io/tfeb-lisp-toys/#case-for-regular-expressions-regex-case">here</a> and source code is <a href="https://github.com/tfeb/tfeb-lisp-toys">here</a>.</p>The empty listurn:https-www-tfeb-org:-fragments-2022-12-16-the-empty-list2022-12-16T17:14:32Z2022-12-16T17:14:32ZTim Bradshaw
<p>My friend Zyni pointed out that someone has been getting really impressively confused and cross on reddit about empty lists, booleans and so on in Common Lisp, which led us to a discussion about what the differences between CL and Scheme really are here. Here’s a summary which we think is correct.</p>
<!-- more-->
<h2 id="a-peculiar-object-in-common-lisp2022-12-16-the-empty-list-footnote-1-definition2022-12-16-the-empty-list-footnote-1-return1">A peculiar object in Common Lisp<sup><a href="#2022-12-16-the-empty-list-footnote-1-definition" name="2022-12-16-the-empty-list-footnote-1-return">1</a></sup></h2>
<p>In Common Lisp there is a single special object, <code>nil</code>.</p>
<ul>
<li>This represents both the empty list, and the special false value, all other objects being true.</li>
<li>This object is a list and is the only list object which is not a cons.</li>
<li>As such this object is an atom, and again it is the only list object which is an atom.</li>
<li>You can take the <code>car</code> and <code>cdr</code> of this object: both of these operations return the object itself.</li>
<li>This object is also a symbol, and it is the only object which is both a list and a symbol.</li>
<li>The empty list, when written as <code>()</code>, is self-evaluating.</li></ul>
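<p>All of which can be checked at the REPL:</p>
<pre class="brush: lisp"><code>> (eq nil '())
t
> (list (listp nil) (symbolp nil) (atom nil) (consp nil))
(t t t nil)
> (values (car nil) (cdr nil))
nil
nil</code></pre>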
<p>Some comments.</p>
<ul>
<li>It is <em>necessary</em> that there be a special empty-list object which is a list but not a cons: the things which are not necessary are that it be a symbol, and that it represent falsity.</li>
<li>Combining the empty list and the special false object can lead to particularly good implementations perhaps.</li>
<li>The implementation of this object is always going to be a bit weird.</li>
<li>It is clear that the empty list cannot be any kind of compound form so requiring it to be quoted — requiring you to write <code>'()</code> really — serves no useful purpose. Nevertheless I (Tim) would probably rather CL did that.</li>
<li>Not having to quote <code>nil</code> on the other hand is not at all strange: any symbol can be made self-evaluating simply by <code>(defconstant s 's)</code>, for instance.</li>
<li>The graph of types in CL is a DAG, not a tree: it is not at all strange that there is an object whose type is both <code>list</code> and <code>symbol</code>.</li></ul>
<h2 id="some-entirely-mundane-things-in-common-lisp">Some entirely mundane things in Common Lisp</h2>
<ul>
<li>There is a symbol, <code>t</code>, which represents the canonical true value. Nothing is magic about this symbol in any way: it could be defined by <code>(defconstant t 't)</code>.</li>
<li>There is a type, <code>boolean</code>, which could be defined by <code>(deftype boolean () '(member nil t))</code>, except that it is required that <code>boolean</code> be a recognisable subtype of <code>symbol</code>. All implementations we have tried recognise <code>(member nil t)</code> as a subtype of <code>symbol</code>, but the standard does not require them to do so. Additionally <code>(type-of 't)</code> must return <code>boolean</code>, we think.</li>
<li>There is a type, <code>null</code>, which could be defined by <code>(deftype null () '(member nil))</code> or <code>(deftype null () '(eql nil))</code>, with the same caveats as above, and <code>(type-of nil)</code> should return <code>null</code>.</li>
<li>There are types named <code>t</code> (top of the type graph) and <code>nil</code> (bottom of type graph).</li></ul>
<p>These mundane things are just that: they don’t require implementational magic at all.</p>
<h2 id="three-peculiar-objects-in-scheme">Three peculiar objects in Scheme</h2>
<p>In Scheme there is an object, <code>()</code>.</p>
<ul>
<li><code>()</code> is the special object that represents the empty list.</li>
<li>It does not represent false.</li>
<li>It is not a symbol.</li>
<li>It is the only list object which is not a pair (cons): <code>list?</code> is true of it but <code>pair?</code> is false.</li>
<li>You can’t take the <code>car</code> or <code>cdr</code> of it.</li>
<li>It is not self-evaluating.</li></ul>
<p>There is another object, <code>#f</code>.</p>
<ul>
<li><code>#f</code> is the distinguished false value and is the only false value in Scheme, all other objects being true.</li>
<li>It is not a symbol or a list but satisfies the <code>boolean?</code> predicate.</li>
<li>It is self-evaluating.</li></ul>
<p>There is another object, <code>#t</code>.</p>
<ul>
<li><code>#t</code> represents the canonical true value, but all objects other than <code>#f</code> are true.</li>
<li>It is not a symbol or a list but satisfies the <code>boolean?</code> predicate.</li>
<li>It is self-evaluating.</li></ul>
<p>Some comments.</p>
<ul>
<li>Scheme does not have such an elaborate type system as CL and, apart from numbers, doesn’t really have subtype relations the way CL does.</li></ul>
<h2 id="a-summary">A summary</h2>
<p>CL’s treatment of <code>nil</code> clearly makes some people very unhappy indeed. In particular they seem to think CL is somehow inconsistent, which it clearly is not. Generally this is either because they don’t understand how it works, because it doesn’t work the way they want it to work, or (usually) both. Scheme’s treatment is often cited by these people as being better. But CL requires <em>precisely one</em> implementationally-weird object, while Scheme requires two, or three if you count <code>#t</code> which you probably should. Both languages have idiosyncratic evaluation rules around these objects. Additionally it’s worth understanding that things like CL’s <code>boolean</code> type mean essentially nothing implementationally: <code>boolean</code> is just a name for a set of symbols. The only thing preventing you from defining a type like this yourself is the requirement for <code>type-of</code> to return the type.</p>
<p>Is one better than the other? No: they’re just not the same. Certainly the CL approach carries more historical baggage. Equally certainly it is perfectly consistent, and changing it would break essentially all CL programs that exist.</p>
<hr />
<p>Thanks to Zyni for most of this: I’m really writing it up just so we can remember it. We’re pretty confident about the CL part, less so about the Scheme bit.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2022-12-16-the-empty-list-footnote-1-definition" class="footnote-definition">
<p><strong>peculiar</strong>, <em>adjective</em>: having eccentric or individual variations in relation to the general or predicted pattern, as in peculiar motion or velocity. <em>noun</em>: a parish or church exempt from the jurisdiction of the ordinary or bishop in whose diocese it is placed; anything exempt from ordinary jurisdiction. <a href="#2022-12-16-the-empty-list-footnote-1-return">↩</a></p></li></ol></div>Closed as duplicate considered harmfulurn:https-www-tfeb-org:-fragments-2022-12-05-closed-as-duplicate-considered-harmful2022-12-05T16:10:07Z2022-12-05T16:10:07ZTim Bradshaw
<p>The various <a href="https://stackexchange.com/">Stack Exchange</a> sites, and specifically <a href="https://stackoverflow.com/questions/tagged/lisp">Stack Overflow</a>, seem to be some of the best places for getting reasonable answers to questions on a wide range of topics from competent people. They would be a lot better if they were not so obsessed about closing duplicates.</p>
<!-- more-->
<p>Closing duplicates seems like a good idea: having a single, canonical, question on a given topic with a single, canonical, answer seems like a good thing. It’s not.</p>
<p>The reason it’s not is that it makes two false assumptions:</p>
<ul>
<li>that a given question has a single best answer;</li>
<li>that this answer does not change over time.</li></ul>
<p>Neither of these assumptions is true for a large number of interesting questions.</p>
<p>Questions can have several good answers. I have at least three introductory books on <a href="https://en.m.wikipedia.org/wiki/Mathematical_analysis" title="analysis">analysis</a>, and not because I didn’t find the good one on the first try: I have several because they give different perspectives — different answers, in the sense of Stack Exchange — to various aspects of the subject. I have several books on introductory quantum mechanics, several books on introductory general relativity, and so it goes on. It is, simply, a delusion that there exists a single most helpful answer to many questions: pretending that there is stupidly limiting.</p>
<p>And what constitutes a good answer can change over time. If you asked, for instance, what a macro was in Lisp and what macros are good for, you would have got very different answers in 1982 than in 2022<sup><a href="#2022-12-05-closed-as-duplicate-considered-harmful-footnote-1-definition" name="2022-12-05-closed-as-duplicate-considered-harmful-footnote-1-return">1</a></sup>. The same is true for many other subjects: human knowledge is not static.</p>
<p>All of this is made worse as only the person asking a question can accept an answer: they may not do so at all or, worse, they may be asking in bad faith and accept wrong or misleading answers (yes, this happens in various Stack Exchanges).</p>
<p>The true Stack Exchange believer will now explain in great detail<sup><a href="#2022-12-05-closed-as-duplicate-considered-harmful-footnote-2-definition" name="2022-12-05-closed-as-duplicate-considered-harmful-footnote-2-return">2</a></sup> why none of this matters: people should just spend their time adding improved answers to questions which already have accepted answers rather than to new questions which will be closed as duplicates. Because, of course, the accepted answer will not be the one almost everyone looks at, and even if they don’t care about increasing their karma on Stack Exchange, they will be very happy to write answers that, in the real world, almost nobody will ever look at.</p>
<p>Yeah, right.</p>
<p>This is such a shame: Stack Exchange is a good thing, but it’s seriously damaged by this unnecessary problem. The answer is not simply to allow unrestricted duplicates, but to wait for a bit and see if a question which is, or is nearly, a duplicate has attracted new and interesting answers, and to not close it as a duplicate in that case. This would not be hard to do.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2022-12-05-closed-as-duplicate-considered-harmful-footnote-1-definition" class="footnote-definition">
<p>And even in 2022 you will get answers from people who seem not to have learned anything since 1982. <a href="#2022-12-05-closed-as-duplicate-considered-harmful-footnote-1-return">↩</a></p></li>
<li id="2022-12-05-closed-as-duplicate-considered-harmful-footnote-2-definition" class="footnote-definition">
<p>Please, don’t: I don’t have a Stack Exchange account any more and, even if I did, I would not be interested. <a href="#2022-12-05-closed-as-duplicate-considered-harmful-footnote-2-return">↩</a></p></li></ol></div>The paperclip maximizersurn:https-www-tfeb-org:-fragments-2022-10-18-the-paperclip-maximizers2022-10-18T09:05:49Z2022-10-18T09:05:49ZTim Bradshaw
<p>Or, the calls are coming from inside the house.</p>
<!-- more-->
<h2 id="the-paperclip-maximizer">The paperclip maximizer</h2>
<p>The paperclip maximizer, probably first described by <a href="https://nickbostrom.com/ethics/ai">Nick Bostrom</a>, is</p>
<blockquote>
<p>a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. […] with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.</p></blockquote>
<p>It is often used as a parable about the dangers of AI research, and particularly of creating AIs which are smarter than us.</p>
<p>But it’s obviously a fairly silly idea: how could anything which is really intelligent be dedicated to such a meaningless, useless goal, to a goal which, if pursued relentlessly and to the exclusion of all else, will certainly result in its own extinction?</p>
<h2 id="a-sufficiency-of-paperclips">A sufficiency of paperclips</h2>
<p>The production of paperclips is not, in fact, a meaningless goal: paperclips are quite useful. The problem happens when the production of paperclips in numbers greater than could ever be useful becomes the <em>only</em> goal.</p>
<p>But hold on. The production of wealth is not, in fact, a meaningless goal: wealth is quite useful. The problem happens when the production of wealth in amounts greater than could ever be useful becomes the <em>only</em> goal.</p>
<p>How rich do you have to be before ‘being richer’ becomes meaningless? I don’t know, but there is, quite clearly, a level at which this happens. And a large number of extremely wealthy people are far beyond this level. And yet they strive unceasingly to accumulate more
<s>paperclips</s>money, even though doing so has long lost any meaning for them or for anyone, and even though doing so is having catastrophic consequences for the future of us all.</p>
<p>Even more absurdly, we are all told by apparently well-educated and quite respectable people that endless economic growth is the cure for all our ills, even though endless economic growth means that resource requirements must grow exponentially with time, which is not physically possible. The pursuit of endless economic growth is merely the pursuit of paperclips in fancy dress: it will lead only to catastrophe.</p>
<p>Do you want to know what living in a world of uncontrolled paperclip maximizers looks like? Look around: the paperclip maximizers are us.</p>Package-local nicknamesurn:https-www-tfeb-org:-fragments-2022-10-14-package-local-nicknames2022-10-14T09:26:31Z2022-10-14T09:26:31ZTim Bradshaw
<p>What follows is an opinion. Do not under any circumstances read it. Other opinions are available (but wrong).</p>
<!-- more-->
<p>Package-local nicknames are an abomination. They should be burned with nuclear fire, and their ashes launched into space on a trajectory which will leave the Solar System.</p>
<p>The only reason why package-local nicknames matter is if you are writing a lot of code with lots of package-qualified names in it. If you are doing that then <em>you are writing code which is hard to read</em>: the names in your code are longer than they need to be and the first several characters of them are package name noise (people read, broadly from left to right). Imagine me:a la:version ge:of oe:English oe:where la:people wrote like that: it’s just horrible. If you are writing code which is hard to read you are writing bad code.</p>
<p>Instead you should do the work to construct a namespace in which the words you intend to use are directly present. This means constructing suitable packages: the files containing the package definitions are then almost the only place where package names occur, and are a minute fraction of the total code. Doing this is a good practice in itself because the package definition file is then a place which describes just what names your code needs, from where, and what names it provides. Things like conduit packages (shameless self-promotion) can help with this, which is why I wrote them: being able to say ‘this package exports the combination of the exports of these packages, except …’ or ‘this package exports just the following symbols from these packages’ in an explicit way is very useful.</p>
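<p>As a sketch of the approach (the package and symbol names here are invented for illustration, and conduit packages make some of this terser), a package definition in this style might look like:</p>
<pre class="brush: lisp"><code>(defpackage :org.tfeb.pretend.report-tool
  (:use :cl)
  ;; say exactly which names this code needs, and from where ...
  (:import-from :org.tfeb.pretend.xml
   #:parse-document #:document-root)
  ;; ... and exactly which names it provides
  (:export #:write-report))

(in-package :org.tfeb.pretend.report-tool)

;; code in this file can now just say parse-document: no
;; package-qualified names and no nicknames are needed</code></pre>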
<p>If you are now rehearsing a litany of things that can go wrong with this approach in rare cases<sup><a href="#2022-10-14-package-local-nicknames-footnote-1-definition" name="2022-10-14-package-local-nicknames-footnote-1-return">1</a></sup>, please don’t: this is not my first rodeo and, trust me, I know about these cases. Occasionally, the CL package system can make it hard or impossible to construct the namespace you need, with the key term here being <em>occasionally</em>: people who give up because something is occasionally hard or impossible have what Erik Naggum famously called ‘one-bit brains’<sup><a href="#2022-10-14-package-local-nicknames-footnote-2-definition" name="2022-10-14-package-local-nicknames-footnote-2-return">2</a></sup>: the answer is to <em>get more bits for your brain</em>.</p>
<p>Once you write code like this then the only place package-local nicknames can matter is, perhaps, the package definition file. And the only reason they can matter there is because people think that picking a name like ‘XML’ or ‘RPC’ or ‘SQL’ for their packages is a good idea. When people in the programming section of my hollowed-out-volcano lair do this they are … well, I will not say, but my sharks are well-fed and those things on spikes surrounding the crater are indeed their heads.</p>
<p>People should use long, unique names for packages. Java, astonishingly, got this right: use domains in big-endian order (<code>org.tfeb.conduit-packages</code>, <code>org.tfeb.hax.metatronic</code>). Do not use short nicknames. Never use names without at least one dot, which should be reserved for implementations and perhaps KMP-style substandards. Names will now not clash. Names will be longer and require more typing, but this will not matter because the only place package names are referred to are in package definition files and in <code>in-package</code> forms, which are a minute fraction of your code.</p>
<p>I have no idea where or when the awful plague of using package-qualified names in code arose: it’s not something people used to do, but it seems to happen really a lot now. I think it may be because people also tend to do this in Python and other dotty languages, although, significantly, in Python you never actually need to do this if you bother, once again, to actually do the work of constructing the namespace you want: rather than the awful</p>
<pre class="brush: python"><code>import sys
... sys.argv ...
...
sys.exit(...)</code></pre>
<p>you can simply say</p>
<pre class="brush: python"><code>from sys import argv, exit
... argv ...
exit(...)</code></pre>
<p>and now the very top of your module lets anyone reading it know exactly what functionality you are importing and from where it comes.</p>
<p>It may also be because the whole constructing namespaces thing is a bit hard. Yes, it is indeed a bit hard, but designing programs, of which it is a small but critical part, <em>is</em> a bit hard.</p>
<p>OK, enough.</p>
<hr />
<p>If, after reading the above, you think you should mail me about how wrong it all is and explain some detail of the CL package system to me: don’t, I do not want to hear from you. Really, I don’t.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2022-10-14-package-local-nicknames-footnote-1-definition" class="footnote-definition">
<p>in particular, if your argument is that someone has used, for instance, the name <code>set</code> in some package to mean, for instance, a set in the sense it is used in maths, and that this clashes with <code>cl:set</code> and perhaps some other packages, don’t. If you are writing a program and you think, ‘I know, I’ll use a symbol with the same name as a symbol exported from CL to mean something else’ in a context where users of your code also might want to use the symbol exported by CL (which in the case of <code>cl:set</code> is ‘almost never’, of course), then my shark pool is just over here: please throw yourself in. <a href="#2022-10-14-package-local-nicknames-footnote-1-return">↩</a></p></li>
<li id="2022-10-14-package-local-nicknames-footnote-2-definition" class="footnote-definition">
<p>Curiously, I think that quote was about Scheme, which I am sure Erik hated. But, for instance, Racket’s module system lets you do just the things which are hard in the package system: renaming things on import, for instance. <a href="#2022-10-14-package-local-nicknames-footnote-2-return">↩</a></p></li></ol></div>Bradshaw's lawsurn:https-www-tfeb-org:-fragments-2022-10-03-bradshaw-s-laws2022-10-03T19:50:51Z2022-10-03T19:50:51ZTim Bradshaw
<p>There are two laws.</p>
<!-- more-->
<h2 id="the-laws">The laws</h2>
<ol>
<li><strong>Bradshaw’s law.</strong> All sufficiently large software systems end up being programming languages.</li>
<li><strong>Zyni’s corollary.</strong> Whenever you think the point is at which the first law will apply, it will apply before that.</li></ol>
<h2 id="implications-of-the-laws">Implications of the laws</h2>
<p>When building software systems you should design them as programming languages. You should do this however small you think they will be. In order to make this practical for small systems you should therefore use a language which allows seamless extension into other languages with insignificant zero-point cost.</p>
<p>But because the laws are not widely known, most large software systems are built without understanding that what is being built is in fact a programming language. Because people don’t know they are building a programming language, don’t know how to build programming languages, and do not use languages which make the seamless construction of programming languages easy, the languages they build are usually terrible: they are hard to use, have opaque and inconsistent semantics and are almost always insecure.</p>Simple logging in Common Lispurn:https-www-tfeb-org:-fragments-2022-09-26-simple-logging-in-common-lisp2022-09-26T11:26:32Z2022-09-26T11:26:32ZTim Bradshaw
<p><code>slog</code> is a simple logging framework for Common Lisp based on the observation that conditions can represent log events.</p>
<!-- more-->
<p><code>slog</code> is based on two observations about the Common Lisp condition system:</p>
<ul>
<li>conditions do not have to represent errors, or warnings, but can just be a way of a program saying ‘look, something interesting happened’;</li>
<li>handlers can decline to handle a condition, and in particular handlers are invoked <em>before the stack is unwound</em>.</li></ul>
<p>Well, saying ‘look, something interesting happened’ is really quite similar to what logging systems do, and <code>slog</code> is built on this idea.</p>
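<p>Here is a minimal sketch of that observation in plain CL (this is <em>not</em> <code>slog</code>’s actual interface, just the trick it is built on): a handler writes the entry and then declines by returning normally, so the code which signalled the event carries on undisturbed:</p>
<pre class="brush: lisp"><code>(define-condition log-event (condition)
  ((message :initarg :message :reader log-event-message)))

(defun compute (x)
  ;; 'look, something interesting happened'
  (signal 'log-event :message (format nil "computing with ~S" x))
  (* x 2))

(handler-bind ((log-event
                (lambda (e)
                  ;; write the entry, then decline to handle the
                  ;; condition by returning normally: signal simply
                  ;; returns nil and compute continues
                  (format *debug-io* "~&log: ~A~%"
                          (log-event-message e)))))
  (compute 21))</code></pre>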
<p><code>slog</code> is the <em>simple</em> logging system: it provides a framework on which logging can be built but does not itself provide a vast taxonomy of log severities &c. Such a thing could be built on top of <code>slog</code>, which aims to provide mechanism, not policy.</p>
<p><code>slog</code> provides a couple of conditions representing log entries, which are designed to be subclassed in real life. Log entries are created using a <code>slog</code> function (this is why <code>slog</code> is called <code>slog</code>: <code>log</code> is already taken) which simply signals an appropriate condition. Handlers are set up by a <code>logging</code> form (this should really be called <code>slogging</code> but it is not), which associates conditions with handlers. There is fairly flexible file handling for logging to files: in particular you can refer to file names, which all get associated with the appropriate stream; streams get closed automagically (you can also close them manually, in which case they will be reopened if need be); and the underlying mechanism for writing entries is exposed by a <code>slog-to</code> generic function which could be extended. Log entry formats can be controlled in various ways.</p>
<p>In addition <code>slog</code> tries to associate log entries with ‘precision time’, which is CL’s universal time expanded to the precision of a millisecond, or of internal time if it is less precise than a millisecond. Setting this up means that <code>slog</code> takes a second or so to load.</p>
<p>Once again: <code>slog</code> is a <em>framework</em>: it has no dealings with log severities, categories, or anything like that. All that is meant to be provided on top of what <code>slog</code> provides.</p>
<p>Documentation is <a href="https://tfeb.github.io/tfeb-lisp-hax/#simple-logging-slog">here</a>, source code is <a href="https://github.com/tfeb/tfeb-lisp-hax">here</a>. It will be available from Quicklisp in due course.</p>Metatronic macrosurn:https-www-tfeb-org:-fragments-2022-09-26-metatronic-macros2022-09-26T10:54:25Z2022-09-26T10:54:25ZTim Bradshaw
<p>Metatronic macros are a simple hack which makes it a little easier to write less unhygienic macros in Common Lisp.</p>
<!-- more-->
<p>Common Lisp macros require you to avoid variable name capture yourself. So, for a macro which iterates over the lines in a file, this is wrong:</p>
<pre class="brush: lisp"><code>(defmacro with-file-lines ((line file) &body forms)
;; wrong
`(with-open-file (in ,file)
(do ((,line (read-line in nil in)
(read-line in nil in)))
((eq ,line in))
,@forms)))</code></pre>
<p>It’s wrong because it binds <code>in</code> to the stream open to the file, and user code could perfectly legitimately refer to a variable of the same name.</p>
<p>The standard approach to dealing with this is to use gensyms:</p>
<pre class="brush: lisp"><code>(defmacro with-file-lines ((line file) &body forms)
;; righter
(let ((inn (gensym)))
`(with-open-file (,inn ,file)
(do ((,line (read-line ,inn nil ,inn)
(read-line ,inn nil ,inn)))
           ((eq ,line ,inn))
,@forms))))</code></pre>
<p>This binds <code>inn</code> (a name chosen as a pun on <code>in</code>) to a new uninterned symbol, and then uses that symbol as the name of the variable bound to the stream. User code can’t then refer to any variable with this unique name.</p>
<p>This works, but it’s ugly. Metatronic macros let you write the above like this:</p>
<pre class="brush: lisp"><code>(defmacro/m with-file-lines ((line file) &body forms)
;; righter, easier
`(with-open-file (<in> ,file)
(do ((,line (read-line <in> nil <in>)
(read-line <in> nil <in>)))
((eq ,line <in>))
,@forms)))</code></pre>
<p>In this macro all symbols which look like <code><</code>…<code>></code> (in any package) are rewritten to unique names, but all references to symbols with the same original name are to the same symbol<sup><a href="#2022-09-26-metatronic-macros-footnote-1-definition" name="2022-09-26-metatronic-macros-footnote-1-return">1</a></sup>. This makes this common case more pleasant to do: macros written using <code>defmacro/m</code> have less noise around their expansion.</p>
<p>Metatronic macros go to some lengths to avoid leaking the rewritten symbols. Given this silly macro</p>
<pre class="brush: lisp"><code>(defmacro/m silly ()
''<silly>)</code></pre>
<p>then <code>(eq (silly) (silly))</code> is false. Similarly given this:</p>
<pre class="brush: lisp"><code>(defmacro/m also-silly (f)
`(eq ,f '<silly>))</code></pre>
<p>Then <code>(also-silly '<silly>)</code> will be false of course.</p>
<p>There is <code>defmacro/m</code>, <code>macrolet/m</code> and <code>define-compiler-macro/m</code>, and the implementation of metatronization is exposed if you need it.</p>
<p>Documentation is <a href="https://tfeb.github.io/tfeb-lisp-hax/#metatronic-macros">here</a>, source code is <a href="https://github.com/tfeb/tfeb-lisp-hax">here</a>. It will be available in Quicklisp in due course.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2022-09-26-metatronic-macros-footnote-1-definition" class="footnote-definition">
<p>in fact, a symbol whose name is <code><></code> is rewritten as a unique gensym as a special case. I am not sure if this is a good thing but it’s what happens. <a href="#2022-09-26-metatronic-macros-footnote-1-return">↩</a></p></li></ol></div>How did we get here?urn:https-www-tfeb-org:-fragments-2022-08-31-how-did-we-get-here2022-08-31T08:39:34Z2022-08-31T08:39:34ZTim Bradshaw
<p>I don’t understand how the UK got onto its current death march, or where that death march will end. Here are some ideas which are worth what you paid for them.</p>
<!-- more-->
<h2 id="a-death-march">A death march</h2>
<p>In the middle of 2022, the UK is watching a competition between Rishi Sunak and Liz Truss to replace Boris Johnson as prime minister of the UK. Johnson is an incompetent, narcissistic liar who is and always was grossly unfit for any high office. Sunak is a plutocrat: he is a man who has never used a contactless payment card, presumably because he has servants who do that for him, a man who doesn’t know the cost of bread, and who pretends to take his children to McDonald’s to buy a meal which has not been available for two years. Liz Truss is merely very stupid: it is curiously difficult to discover what class of degree she
<s>bought</s>was awarded.</p>
<p>Both Sunak and Truss served in Johnson’s cabinet: Sunak did, at least, eventually resign, triggering the cascade of resignations which finally led to Johnson’s downfall. Truss did not resign: she is a Johnson loyalist who seems to think that Johnson almost literally dancing on the graves of the people of the UK was just fine, and that his incompetence, endless lying and theft from the country were just fine too. Liz Truss is, in fact, Continuity Johnson.</p>
<p>Sunak and Truss are not competing for the votes of the UK electorate: they are competing for the votes of a tiny number of conservative party members who have paid for the privilege of selecting the prime minister. These people are overwhelmingly well-off, old if not actually senile, white, and male. Most of them live in the south-east of the UK. These are people who read newspapers, on paper, and they have the views you would expect them to have: they’re right-wing racists who look back fondly on an imagined golden age of the 1980s and before. They think anthropogenic climate change, if not actually a lie, is something that will matter only after they are dead (this is true: it will matter mostly after they are dead) and since they value their comfort far above the lives of their children and grandchildren they are happy to do nothing about it. They don’t like cyclists, feminists, people who are not white, gypsies and travellers, poor people, immigrants, and so on. ‘Woke’ — a term which means ‘being decent to other people’ — is anathema to them: they do not want to have to behave decently to other people, especially not people who look different from them. They are, in other words, exactly who you would expect them to be.</p>
<p>This group is <em>extremely</em> unrepresentative of the population of the UK, and increasingly so. And they know this, at least dimly. They know if they are to continue their lives of comfort then something must be done about this awkward fact: something must be done about democracy.</p>
<p>Both Sunak and Truss are working hard to appeal to this group: lurching ever further to the right and ever further away from democracy. Probably they will not succeed in levels of voter suppression sufficient to ensure their long-term survival, although they may. But whoever wins will actively and intentionally do vast damage to the UK in the next two years, and will certainly do nothing about climate change. And by the time the victor leaves power — if they leave power — it will be too late: too late to rescue the UK as a serious country, and too late for any concerted, international action to address climate change which must happen <em>now</em>.</p>
<p>Finally, of course, it is almost inevitable that Truss will win: tory party members are racists, and she is white and blonde, while Sunak is not. Both would be terrible prime ministers, but Sunak might at least be competent:</p>
<blockquote>
<p>The Tory party itself is quite rotten now and the sign of that is that they can’t think of anyone better than Boris, who’s clearly just completely shot. They are collectively saying, “if we get rid of him, we might get somebody worse”. It says a lot about the state of the Tory party. And they actually could get somebody worse: Liz Truss would be even worse than Boris. She’s about as close to properly crackers as anybody I’ve met in Parliament. — <a href="https://unherd.com/2022/05/dominic-cummings-i-dont-like-parties/">Dominic Cummings</a></p></blockquote>
<p>The future for the UK is not bright.</p>
<h2 id="no-easy-answers">No easy answers</h2>
<p>It’s tempting to say that, well, it’s brexit: this is what was always going to happen after brexit. I don’t think that’s true: brexit was certainly a bad idea, but it didn’t have to be anything like <em>this</em> terrible.</p>
<p>Brexit was always going to be extremely challenging<sup><a href="#2022-08-31-how-did-we-get-here-footnote-1-definition" name="2022-08-31-how-did-we-get-here-footnote-1-return">1</a></sup> to implement in a way which was not a catastrophe, which should not have been surprising to anyone. However it does seem to have been surprising to a lot of the politicians who were so desperate for brexit. They had no plan, at all, for how it should be implemented. Why not? Why did the very people who wanted brexit so much have no plans?</p>
<p>Well, I think there are three or four plausible reasons.</p>
<ol>
<li>They didn’t understand that brexit would be complicated, because they were not terribly smart. Smart people, after all, understand that it is often best to quietly abandon goals which are extremely complicated and risky to achieve<sup><a href="#2022-08-31-how-did-we-get-here-footnote-2-definition" name="2022-08-31-how-did-we-get-here-footnote-2-return">2</a></sup>, even if they are much-desired: brexiteers did not.</li>
<li>They did not expect to win, so having a plan for winning was not seen as something they needed to do.</li>
<li>They expected that other people would plan for them. The motivation for brexit has always been mostly about resentment: somehow <em>other people</em> are always the problem, in the case of brexit those other people are the EU and foreigners generally. And, like children, they then expect the other people to solve the problem for them.</li>
<li>Perhaps brexiteers <em>wanted</em> a catastrophe because they thought it would give them a route to wealth and power. People do suggest this, usually under the rubric of ‘disaster capitalism’: I think it’s not very plausible.</li></ol>
<p>Between them, I think these do explain what happened.</p>
<h2 id="the-day-after-judgement">The day after judgement</h2>
<p>On midsummer day, 2016, the brexiteers faced an inconvenient truth: they had won. Now, instead of sitting around whining, they had to do something.</p>
<p>You might think that the sensible thing to do would have been to say that implementing brexit was going to be extremely complex, make some excuse about why they had made no plans, and explain that it would thus take a long time. But they couldn’t do that: they knew very well that the brexit vote was driven by older people: if it took a decade or so to be ready to actually leave the EU then enough of those people would be dead that it would be clear that brexit was being implemented against the clear will of the majority of voters. There would at the very least be strong pressure for another referendum, which they would lose.</p>
<p>So, if they were going to succeed in their stated goal, brexit had to happen rather quickly. But they had no plans: they were in a serious bind.</p>
<h2 id="the-phoney-war">The phoney war</h2>
<p>I’m not going to write some long, boring, and probably wrong, description of what happened between the referendum and Theresa May’s resignation. Enough to say that this was the period when it became clear even to people who had not been paying attention that implementing brexit was somewhere between hard and impossible. Perhaps the most interesting question is why May invoked article 50 as soon as she did: my guess is that she believed that, if she delayed, the brexiteers would destroy the tory party. Probably she was right.</p>
<p>But the brexiteers destroyed the tory party anyway.</p>
<h2 id="the-church-of-the-subgenius">The church of the subgenius</h2>
<p>After the phoney war everyone who knew anything knew there was now no hope of a good answer to brexit, and that things were therefore going to get much worse in the UK. This really left two-and-a-half sorts of people interested in running the country:</p>
<ul>
<li>people who were too stupid to understand this;</li>
<li>people who did not care;</li>
<li>and perhaps a foolish few who still sought to minimise the damage<sup><a href="#2022-08-31-how-did-we-get-here-footnote-3-definition" name="2022-08-31-how-did-we-get-here-footnote-3-return">3</a></sup>.</li></ul>
<p>What we got was Boris Johnson: the worst of all possible worlds. Johnson certainly does not care about the consequences of brexit for the UK as there is only one thing Johnson cares about: Johnson. He is often portrayed as brilliant but indolent: he is certainly indolent but he’s very far from brilliant. In 2016 he was too stupid to realise that the poisoned chalice of brexit would poison him as well; in 2020 he was too stupid to understand that a pandemic whose doubling time was three days required action to be taken extremely quickly, and too stupid to realise that, since phone cameras exist, holding drunken parties during lockdowns was not going to end well for him.</p>
<p>But he was not just stupid: he was a narcissist who regarded himself as a very god amongst men. He was not about to put up with dissent, or people who might cause him to suspect, however dimly, that they might be smarter than him as, for Johnson, there could be nobody smarter than Johnson. So talent was systematically driven away from his cabinet and from the parliamentary party: with Johnson in charge, what was rewarded was only bovine obedience.</p>
<p>And so a generation of competent people were driven out of government.</p>
<h2 id="an-infestation-of-idiots">An infestation of idiots</h2>
<p>After Johnson finally collapsed under the weight of his own arrogance and stupidity who then was left to take his place? The tory party has long been known as the stupid party<sup><a href="#2022-08-31-how-did-we-get-here-footnote-4-definition" name="2022-08-31-how-did-we-get-here-footnote-4-return">4</a></sup>: after successive episodes of defenestrations of anyone who expressed independent ideas, that statement was now true. Only stupid people remained.</p>
<p>In particular I think the notion that the tories are somehow conspiring with some unspecified group of financiers to enrich themselves (the fourth possible reason for there being no plan for brexit above) is really pretty implausible. You only have to look at them: these people are idiots suffering from the Dunning-Kruger effect, not evil geniuses.</p>
<p>It <em>is</em> possible, of course, that, while they are idiots, they are somebody’s <em>useful</em> idiots. But they are still idiots. Dark forces may perhaps be conspiring to get rich from the destruction of the UK, but if they are doing so they are not doing so with the knowledge of the halfwit clowns in the tory party who are, in fact, just what they appear to be.</p>
<p>In any case, the selection of someone to replace Johnson could only be made from the group of people who had not either left or been driven out by Johnson: from the last remaining dregs of the tory party. A choice to be made from the dim, by the dim. And thus we have a competition between Liz Truss and Rishi Sunak.</p>
<blockquote>
<p>We wait. We are bored. No, don’t protest, we are bored to death, there’s no denying it. Good. A diversion comes along and what do we do? We let it go to waste. Come, let’s get to work. In an instant all will vanish and we’ll be alone once more, in the midst of nothingness.</p></blockquote>
<hr />
<h2 id="addendum-liz-truss">Addendum: Liz Truss</h2>
<p>When I wrote the above it was clear Truss would win, although she had not yet won. But I had no real idea what a spectacular catastrophe she would be: obviously I knew that she’s very stupid, but I don’t think I had any real appreciation just how stupid she would turn out to be.</p>
<p>I don’t know, now, what the best hope for the UK is: that she remain in power and lead the tories to a landslide defeat, or that she is evicted promptly and we have someone — anyone, almost — who will do less damage in the next couple of years.</p>
<p>The future for the UK really <em>isn’t</em> bright, is it?</p>
<hr />
<div class="footnotes">
<ol>
<li id="2022-08-31-how-did-we-get-here-footnote-1-definition" class="footnote-definition">
<p>If not impossible: it’s very hard to see what could have been done about the Northern Ireland / Eire problem without serious damage to the <a href="https://en.wikipedia.org/wiki/Good_Friday_Agreement">Good Friday agreement</a>. <a href="#2022-08-31-how-did-we-get-here-footnote-1-return">↩</a></p></li>
<li id="2022-08-31-how-did-we-get-here-footnote-2-definition" class="footnote-definition">
<p>I’d really like to move to a flat in London but I have too much stuff and too many entanglements and getting from here to there is just absurdly hard. So, however much I might want to move, I understand that it’s not really possible. <a href="#2022-08-31-how-did-we-get-here-footnote-2-return">↩</a></p></li>
<li id="2022-08-31-how-did-we-get-here-footnote-3-definition" class="footnote-definition">
<p>Rory Stewart, perhaps. <a href="#2022-08-31-how-did-we-get-here-footnote-3-return">↩</a></p></li>
<li id="2022-08-31-how-did-we-get-here-footnote-4-definition" class="footnote-definition">
<p>A quote attributed to John Stuart Mill. <a href="#2022-08-31-how-did-we-get-here-footnote-4-return">↩</a></p></li></ol></div>Macros (from Zyni)urn:https-www-tfeb-org:-fragments-2022-08-27-macros-from-zyni2022-08-27T10:12:33Z2022-08-27T10:12:33ZTim Bradshaw
<blockquote>
<p>It is the business of the future to be dangerous; and it is among the merits of science that it equips the future for its duties. — Alfred Whitehead</p></blockquote>
<!-- more-->
<p>Once upon a time, long ago in a world far away, Lisp had many features which other languages did not have. Automatic storage management, dynamic typing, an interactive environment, lists, symbols … and macros, which allow you to seamlessly extend the language you have into the language you want and need.</p>
<p>But that was long long ago in a world far away where giants roamed the earth, trolls lurked under every bridge and, they say, gods yet lived on certain distant mountains.</p>
<p>Today, and in this world, many many languages have automatic storage management, are dynamically typed, have symbols, lists, interactive environments, and so and so and so. More of these languages arise from the thick, evil-smelling sludge that coats every surface each day: hundreds, if not thousands of them, like flies breeding on bad meat which must be swatted before they lay their eggs on your eyes.</p>
<p>Lisp, today and in this world not another, has <em>exactly one</em> feature which still distinguishes it from the endless buzz of these insect languages. That feature is seamless language extension by macros.</p>
<p>So yes, macros are dangerous, and they are hard and they are frightening. They are dangerous and hard and frightening because all powerful magic is dangerous and hard and frightening. They are dangerous because they are a thing which has escaped here from the future and it is the business of the future to be dangerous.</p>
<p>If macros are too dangerous, too hard and too frightening for you, <em>do not use Lisp</em> because <em>macros are what Lisp is about</em>.</p>
<hr />
<p>This originated as a comment by my friend Zyni: it is used with her permission.</p>Two simple pattern matchers for Common Lispurn:https-www-tfeb-org:-fragments-2022-07-21-two-simple-pattern-matchers-for-common-lisp2022-07-21T09:17:45Z2022-07-21T09:17:45ZTim Bradshaw
<p>I’ve written two pattern matchers for Common Lisp:</p>
<ul>
<li><code>destructuring-match</code>, or <code>dsm</code>, is a <code>case</code>-style construct which can match <code>destructuring-bind</code>-style lambda lists with a couple of extensions;</li>
<li><code>spam</code>, the simple pattern matcher, does not bind variables but lets you match based on assertions about, for instance, the contents of lists.</li></ul>
<p>Both <code>dsm</code> and <code>spam</code> strive to be simple and correct.</p>
<!-- more-->
<h2 id="simplicity">Simplicity</h2>
<p>Both <code>dsm</code> and <code>spam</code> are <em>simple</em>: they do exactly one thing, and try to do that one thing well.</p>
<p>You could think of <code>dsm</code> as being to some other CL pattern matchers as Unix once was to Multics: <code>dsm</code> is the result of me looking at those other systems and thinking ‘please, not that’.</p>
<p>Those systems are vast, have several levels, and are extensible: some subset of them might do what I wanted to be able to do — make writing macros less unpleasant — but I’m not sure<sup><a href="#2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-1-definition" name="2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-1-return">1</a></sup>. They are obsessed with performance.</p>
<p><code>dsm</code> does one thing, and exports a single macro. If you know how to use <code>destructuring-bind</code> and <code>case</code> you already know almost all there is to know about <code>dsm</code>: it’s a <code>case</code> construct whose cases are <code>destructuring-bind</code> lambda lists. <code>dsm</code> doesn’t care about performance at all, because macroexpansion performance never matters.</p>
<p>At least one of those matchers has almost as many commits in its repo as dsm has lines of code.</p>
<p>Like Multics was, those hairy pattern matchers are fine systems. But there was a good reason that Thompson and Ritchie wrote something very different<sup><a href="#2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-2-definition" name="2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-2-return">2</a></sup>.</p>
<h2 id="destructuring-match--dsm"><code>destructuring-match</code> / <code>dsm</code></h2>
<p>In CL, <code>destructuring-bind</code> and, mostly equivalently, macro argument lists are both a blessing and a curse. They’re a blessing because they support destructuring, so you can write, for instance</p>
<pre class="brush: lisp"><code>(defmacro with-foo ((var &optional init) &body forms)
...)</code></pre>
<p>They’re a curse because they are so fragile: <code>with-foo</code> can <em>only</em> support that syntax and will fail with an ugly error message from the implementation when it is fed anything else.</p>
<p>Writing robust macros in CL, especially macros which expect various different argument patterns, then turns into a great saga of manually checking argument patterns before using <code>destructuring-bind</code> to actually bind things. The result of that, of course, is that very many CL macros are not robust and have terrible error reporting.</p>
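<p>As an invented example of what that saga looks like, for just a single argument pattern and with only rudimentary checks at that:</p>
<pre class="brush: lisp"><code>(defmacro with-foo (&body forms)
  ;; check the shape by hand before daring to destructure ...
  (unless (and (consp forms)
               (consp (first forms))
               (symbolp (first (first forms))))
    (error "bad syntax for with-foo: ~S" forms))
  ;; ... and only then actually bind things
  (destructuring-bind ((var &optional init) &body body) forms
    `(let ((,var ,init))
       ,@body)))</code></pre>
<p>Multiply that by several possible argument patterns and you have the saga.</p>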
<p><code>destructuring-match</code> does away with all this unpleasantness. It supports a slightly extended version of the lambda lists that <code>destructuring-bind</code> supports, has ‘guard’ clauses which allow additional checks, and will match a form against any number of lambda lists until one matches, with a fallback case.</p>
<p>As an example here is a version of <code>with-foo</code> which allows two patterns:</p>
<pre class="brush: lisp"><code>(defmacro with-foo (&body forms)
(destructuring-match forms
(((var &optional init) &body body)
(:when (symbolp var))
...)
((((var &optional type) &optional init) &body body)
(:when (symbolp var))
...)
(otherwise
(error ...))))</code></pre>
<p>The guard clauses check that <code>var</code> is a symbol before the match succeeds, and will therefore ensure that the second match is the one chosen for <code>(with-foo ((x y) 1) ...)</code>.</p>
<p><code>destructuring-match</code> also supports ‘blank’ variables: any variable whose name is <code>_</code> (in any package) is ignored, and all such variables are distinct. So for instance</p>
<pre class="brush: lisp"><code>(destructuring-match l
((_ _ _) ...))</code></pre>
<p>will match if <code>l</code> is a proper list with exactly three elements.</p>
<p>Using <code>destructuring-match</code> it’s easy to write this macro<sup><a href="#2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-3-definition" name="2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-3-return">3</a></sup>:</p>
<pre class="brush: lisp"><code>(defmacro define-matching-macro (name &body clauses)
(let ((<whole> (make-symbol "WHOLE"))
(<junk> (make-symbol "JUNK")))
(destructuring-match clauses
((doc . the-clauses)
(:when (stringp doc))
`(defmacro ,name (&whole ,<whole> &rest ,<junk>)
,doc
(destructuring-match ,<whole> ,@the-clauses)))
(the-clauses
`(defmacro ,name (&whole ,<whole> &rest ,<junk>)
(destructuring-match ,<whole> ,@the-clauses))))))</code></pre>
<p>And this then allows the above <code>with-foo</code> macro to be written like this:</p>
<pre class="brush: lisp"><code>(define-matching-macro with-foo
((_ (var &optional init) &body forms)
(:when (symbolp var))
...)
((_ ((var &optional type) &optional init) &body forms)
(:when (symbolp var))
...)
(form
(error "~S is bad syntax for with-foo" form)))</code></pre>
<p><code>dsm</code> was not written with performance in mind but it seems to be, typically, around a tenth to a half the speed of <code>destructuring-bind</code> while being far more powerful of course.</p>
<p><code>dsm</code> can be found <a href="https://tfeb.github.io/#destructuring-match-for-common-lisp">here</a>. It will probably end up in Quicklisp in due course but currently it isn’t there, and some of its dependencies are also not up to date there.</p>
<h2 id="spam-the-simple-pattern-matcher"><code>spam</code>, the simple pattern matcher</h2>
<p><code>dsm</code> has a lot of cases where it needs to check what the lambda list it is parsing and compiling looks like. To do this I wrote a bunch of predicate constructors and combinators, which return predicates which will check things. So for example:</p>
<ul>
<li><code>(is 'foo)</code> returns a function which checks its argument is <code>eql</code> to <code>foo</code>;</li>
<li><code>(some-of p1 ... pn)</code> returns a function of one argument which will succeed if one of the predicates which are its arguments succeeds: <code>(some-of (is 'foo) (is 'bar))</code>, for instance, matches either <code>foo</code> or <code>bar</code>;</li>
<li><code>(head-matches p1 ... pn)</code> returns a function which will succeed if the predicates which are its arguments succeed on the first elements of a list.</li></ul>
<p>There are several other predicate constructors and predicate combinators, but <code>spam</code> can use any predicate.</p>
<p>There is then a <code>matching</code> macro which uses these to match things, and a <code>matchp</code> function which simply invokes a predicate.</p>
<p>As an example, here’s part of a matcher for <code>&rest</code> specifications in lambda lists.</p>
<pre class="brush: lisp"><code>(matching ll
((head-matches (some-of (is '&rest) (is '&body))
(var)
(is '&key))
;; &rest x &key ...
...)
((head-matches (some-of (is '&rest) (is '&body))
(var)
(any))
;; &rest x with something else
...)
((list-matches (some-of (is '&rest) (is '&body))
(var))
;; &rest x and no more
...)
(otherwise
(error "oops")))</code></pre>
<p><code>spam</code> is pretty useful, and code written using it is much easier to read than doing the equivalent checks manually. It is used extensively in the implementation of <code>dsm</code>.</p>
<p><code>spam</code> is now one of <a href="https://tfeb.github.io/#some-common-lisp-hacks">my CL hax</a>.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-1-definition" class="footnote-definition">
<p>At the time of writing <a href="https://github.com/guicho271828/trivia">Trivia</a> supports lambda lists I think, but not destructuring lambda lists: <code>(match '(1 (1)) ((lambda-list a (b)) (values a b)))</code> will fail, for instance. I don’t know whether it is <em>meant</em> to support destructuring lambda lists — comments in the sources imply it is, but it clearly does not in fact. <a href="#2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-1-return">↩</a></p></li>
<li id="2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-2-definition" class="footnote-definition">
<p>I am aware of <a href="https://dreamsongs.com/WIB.html">Gabriel’s ‘worse is better’ paper</a> and its various afterthoughts. <code>dsm</code> is not like that: it is smaller and simpler, but is not intended to be worse. <code>dsm</code> is to these other systems perhaps as Scheme was to CL. Gabriel also talks about these two options, of course. <a href="#2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-2-return">↩</a></p></li>
<li id="2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-3-definition" class="footnote-definition">
<p>Note this macro is 12 lines, half of which are handling the possible docstring. <a href="#2022-07-21-two-simple-pattern-matchers-for-common-lisp-footnote-3-return">↩</a></p></li></ol></div>Macroexpansion in Common Lispurn:https-www-tfeb-org:-fragments-2022-07-05-macroexpansion-in-common-lisp2022-07-05T15:16:29Z2022-07-05T15:16:29ZTim Bradshaw
<p>Yet another description of macroexpansion in Common Lisp. There is nothing particularly new here and it partly duplicates some previous articles: I just wanted to rescue the text.</p>
<!-- more-->
<p>The following description is of how macroexpansion works in Common Lisp<sup><a href="#2022-07-05-macroexpansion-in-common-lisp-footnote-1-definition" name="2022-07-05-macroexpansion-in-common-lisp-footnote-1-return">1</a></sup>. It is slightly simplified and I have not always mentioned when it is<sup><a href="#2022-07-05-macroexpansion-in-common-lisp-footnote-2-definition" name="2022-07-05-macroexpansion-in-common-lisp-footnote-2-return">2</a></sup>. It is at least a partial duplicate of <a href="../../../../2021/11/11/the-proper-use-of-macros-in-lisp/">this previous article</a>.</p>
<h2 id="what-macros-are">What macros are</h2>
<p><strong>Macros in CL are functions, written in ordinary CL, whose argument is source code, and whose value is other source code.</strong></p>
<p>Source code is represented as s-expressions: symbols, conses, and so on. Macros don’t do string-rewriting.</p>
<p>The way to think slightly more abstractly about macros is that they are <em>functions between languages</em>: a macro is a function which takes as an argument fragments of a language which includes that macro, and returns as a value either a fragment of a language which <em>doesn’t</em> include the macro, or a fragment of a language which includes it in some weaker way.</p>
<p>The aim of macros is to build, on top of the language you are given, another language which is closer to the language in which you want to express your programs. CL itself is one such language, built-up using a number of standard macros on top of a substrate language.</p>
<p>People often think of macros as ‘functions which do not evaluate their arguments’: that’s really not right. They are functions — perfectly ordinary functions, written in CL — but their argument is source code, and their value is source code.</p>
<h2 id="how-macroexpansion-happens">How macroexpansion happens</h2>
<p>[This is simplified.]</p>
<p>Given some initial compound form <code>(m ...)</code>, macroexpansion proceeds like this.</p>
<p><strong>Start.</strong> Given a form, it should be one of</p>
<ul>
<li>a compound form <code>(m ...)</code>,</li>
<li>or a non-compound form.</li></ul>
<p><strong>Compound form.</strong> The form is <code>(m ...)</code></p>
<ol>
<li>Look at <code>m</code>: if it has an associated macro function (found using <code>macro-function</code>) then simply call that function on the whole form <code>(m ...)</code>: its result is a new form<sup><a href="#2022-07-05-macroexpansion-in-common-lisp-footnote-3-definition" name="2022-07-05-macroexpansion-in-common-lisp-footnote-3-return">3</a></sup>. Recurse on this form from <strong>Start</strong>.</li>
<li>If <code>m</code> is not a macro, then it may be a special operator, such as <code>setq</code> or <code>if</code>. Consider appropriate forms in the body of this form for expansion: which forms are known by the rules of the special operator. For instance all the forms in <code>(if ...)</code> are considered for expansion, while in <code>(setq <x> <y>)</code> only <code><y></code> is, and so on.</li>
<li>If it is not a macro and not a special form, then <code>(m ...)</code> is assumed to be a function call, with <code>m</code> denoting a function. All the forms in the body are now considered for macro expansion. Once that is done the expansion process is complete.</li>
<li>As a special case of the last case, <code>m</code> may be <code>(lambda (...) ...)</code>, so the whole form will be <code>((lambda (...) ...) ...)</code>. In this case the forms in the body of the <code>lambda</code> are considered for macroexpansion; otherwise this is the same as the last case<sup><a href="#2022-07-05-macroexpansion-in-common-lisp-footnote-4-definition" name="2022-07-05-macroexpansion-in-common-lisp-footnote-4-return">4</a></sup>.</li>
<li>There are no other cases.</li></ol>
<p><strong>Non-compound form.</strong> There is nothing to do here.</p>
<p>As I said, this is simplified: there are local macros for instance, and various other things. However one critical thing is that when expanding some macro form <code>(m ...)</code>, the expansion carries on until it gets something which is not a macro form <em>before</em> looking at whatever is in the body of the form. That’s critical: although it’s tempting to think that expansion should happen inside-out, it can’t work that way, because until the outer macro has done its work you can’t know if the things in its body even <em>should</em> be candidates for macro expansion. There’s an example of this below.</p>
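<p>You can watch the outside-in order directly with the standard <code>macroexpand-1</code>:</p>
<pre class="brush: lisp"><code>(defmacro outer (&body forms)
  `(progn ,@forms))

(defmacro inner (x)
  `(list ,x))

(macroexpand-1 '(outer (inner 1)))
;; -> (progn (inner 1)), t
;; inner is untouched until the forms in progn's body are
;; considered in their turn</code></pre>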
<h2 id="macros-the-hard-way">Macros the hard way</h2>
<p>OK, I said that macros were just functions, and I meant that. Let’s write a macro <code>with-debugging</code> which is like <code>progn</code> but it will perhaps print what it is doing.</p>
<p>So let’s write the macro function:</p>
<pre class="brush: lisp"><code>(defvar *debugging* t)
(defun expand-with-debugging (form environment)
(declare (ignore environment)) ;I'm not mentioning environments
`(progn
,@(loop for thing in (rest form)
collect `(when *debugging*
(format *debug-io* "~&~S~%" ',thing))
collect thing)))</code></pre>
<p>And we can test it:</p>
<pre class="brush: lisp"><code>> (expand-with-debugging '(with-debugging (cons 1 2) 4) nil)
(progn
(when *debugging* (format *debug-io* "~&~S~%" '(cons 1 2)))
(cons 1 2)
(when *debugging* (format *debug-io* "~&~S~%" '4))
4)</code></pre>
<p>And now we can install it as the macro function for <code>with-debugging</code>:</p>
<pre class="brush: lisp"><code>(setf (macro-function 'with-debugging) #'expand-with-debugging)</code></pre>
<p>And now</p>
<pre><code>> (with-debugging
(cons 1 2)
4)
(cons 1 2)
4
4</code></pre>
<p>Or</p>
<pre><code>> (setf *debugging* nil)
nil
> (with-debugging
(cons 1 2)
4)
4</code></pre>
<p>OK, here’s another macro done this way, and the purpose of this one is to show you why macroexpansion has to happen outside in. Let’s say we want to be able to denote functions by <code>(fun (arg ...) form ...)</code>, but we’d like to be able to debug the body with <code>with-debugging</code>. We can do that:</p>
<pre class="brush: lisp"><code>(defun expand-fun (form environment)
(declare (ignore environment)) ;still not mentioning environments
`(function (lambda ,(second form)
;; Not dealing with declarations
(with-debugging ,@(cddr form)))))
(setf (macro-function 'fun) #'expand-fun)</code></pre>
<p>And now</p>
<pre class="brush: lisp"><code>> (let ((*debugging* t))
(funcall (fun (a) (+ a a)) 1))
(+ a a)
2</code></pre>
<p>Now you can see why the macro expander has to work the way it does: the first form in the body of <code>fun</code> should not be macroexpanded at all, and the remaining forms are going to get wrapped in a macro which isn’t there in the source at all. So macroexpansion has to go outside in, as described above.</p>
<h2 id="a-better-way">A better way</h2>
<p>Well, you could write macros like that. Probably once they were written like that. But it’s a pain, because you almost never care about the first element of the form — the macro’s own name — and you have to manually take the rest of the form apart yourself. And also you need to deal with questions about making sure macros are defined at compile time and so on.</p>
<p>That’s what <code>defmacro</code> does. It is itself a macro, and its expansion will involve setting the <code>macro-function</code> of the macro to some appropriate thing. So using <code>defmacro</code> I can write the <code>fun</code> macro:</p>
<pre class="brush: lisp"><code>(defmacro fun ((&rest args) &body forms)
;; still not dealing with declarations
`(function (lambda (,@args) (with-debugging ,@forms))))</code></pre>
<p>This is easier to understand of course. But all it is is a (fairly elaborate!) wrapper around what I did above.</p>
<h2 id="watching-the-detectives">Watching the detectives</h2>
<p>Using <a href="https://tfeb.github.io/tfeb-lisp-hax/#tracing-macroexpansion-trace-macroexpand"><code>trace-macroexpand</code></a> you can watch macroexpansion happen.</p>
<pre><code>> (needs (:org.tfeb.hax.trace-macroexpand :compile t :use t))
; Loading [...]
((:org.tfeb.hax.trace-macroexpand t))
> (trace-macroexpand t)
nil
> (setf *trace-macroexpand-print-length* nil
*trace-macroexpand-print-level* nil)
nil
> (trace-macro fun with-debugging)
(fun with-debugging)
> (setf *debugging* nil)
nil
> (funcall (fun (a) a) 1)
(fun (a) a)
-> #'(lambda (a) (with-debugging a))
(with-debugging a)
-> (progn (when *debugging* (format *debug-io* "~&~S~%" 'a)) a)
(with-debugging a)
-> (progn (when *debugging* (format *debug-io* "~&~S~%" 'a)) a)
1</code></pre>
<p>Note that <code>with-debugging</code> is expanded twice: this is an artifact of the implementation, as there’s no promise that macros only get expanded once in interpreted code.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2022-07-05-macroexpansion-in-common-lisp-footnote-1-definition" class="footnote-definition">
<p>This was once going to be a Stack Overflow answer, and I didn’t want to throw it away. <a href="#2022-07-05-macroexpansion-in-common-lisp-footnote-1-return">↩</a></p></li>
<li id="2022-07-05-macroexpansion-in-common-lisp-footnote-2-definition" class="footnote-definition">
<p>And of course I might just be wrong about some details. <a href="#2022-07-05-macroexpansion-in-common-lisp-footnote-2-return">↩</a></p></li>
<li id="2022-07-05-macroexpansion-in-common-lisp-footnote-3-definition" class="footnote-definition">
<p>I am not talking about the environment objects which get passed to macro functions. <a href="#2022-07-05-macroexpansion-in-common-lisp-footnote-3-return">↩</a></p></li>
<li id="2022-07-05-macroexpansion-in-common-lisp-footnote-4-definition" class="footnote-definition">
<p>Another way of thinking about <code>((lambda (...) ...) ...)</code> is that it is the same as <code>(funcall (function (lambda (...) ...)) ...)</code> and, since <code>function</code> is a special operator, its rules apply, and include expanding the forms in the body of the <code>(lambda (...) ...)</code> form (and of course <code>lambda</code> is itself a macro, so <code>(lambda (...) ...)</code> expands to <code>(function (lambda (...) ...))</code> and then the rules for <code>function</code> apply again). I am old enough to remember adding the macro for <code>lambda</code> to various antique CLs. <a href="#2022-07-05-macroexpansion-in-common-lisp-footnote-4-return">↩</a></p></li></ol></div>More on UK retail energy pricesurn:https-www-tfeb-org:-fragments-2022-05-24-more-on-uk-retail-energy-prices2022-05-24T12:36:50Z2022-05-24T12:36:50ZTim Bradshaw
<p>Three days ago I pointed out that the UK government was lying about the influence of the war in Ukraine on UK retail energy prices. Now we have a better idea what that influence might actually be.</p>
<!-- more-->
<p>The UK government has been <a href="../../../../2022/05/21/the-uk-government-is-lying-about-energy-prices/">lying that the current retail energy cost is largely due to the war in Ukraine</a>. But on 24th May 2022, the head of Ofgem <a href="https://www.bbc.co.uk/news/business-61562657">told MPs that the energy price cap was likely to rise to £2,800 from the 1st October 2022</a>, and these predicted rises <em>may</em> be largely due to the war in Ukraine.</p>
<p>Here are details of the cap at various dates, based on <a href="https://www.ofgem.gov.uk/publications/price-cap-increase-ps693-april">Ofgem</a> and <a href="https://www.bbc.co.uk/news/business-61562657">the BBC report</a>:</p>
<ul>
<li>2022, to 30th March 2022: £1,277;</li>
<li>1st April 2022 to 30th September 2022: £1,971, increasing by £693 (differences due to rounding) or 54%;</li>
<li>from 1st October 2022 (predicted on 24th May 2022): £2,800, increasing by £829, or 42%, for a cumulative increase of 119%.</li></ul>
<p>So, in 185 days, retail prices will have gone up by 119% – in other words they will have gone up by a factor of 2.19, more than double. Rather more than half of that increase, the rise predicted for October, may be largely due to the war in Ukraine (again: <a href="../../../../2022/05/21/the-uk-government-is-lying-about-energy-prices/"><em>none</em> of the current retail price is due to the war in Ukraine</a>).</p>
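<p>As a check on the arithmetic: £1,971 / £1,277 ≈ 1.54, the 54% rise; £2,800 / £1,971 ≈ 1.42, the predicted 42% rise; and £2,800 / £1,277 ≈ 2.19, the cumulative 119% increase.</p>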
<p>Given the <a href="https://www.economist.com/leaders/2022/05/19/the-coming-food-catastrophe">coming food catastrophe</a>, which <em>is</em> largely a result of the war in Ukraine, and the grotesque incompetence of the UK government, people in the UK will probably both die of cold and starve in the winter of 2022–2023. For the second time in a little over two years, the UK government will have failed in its most basic task: keeping its citizens alive.</p>
<p>The UK government would like you to believe that the recent increases in the rate people pay for energy are due to the war in Ukraine. This is a lie.</p>
<!-- more-->
<p><a href="https://www.bbc.co.uk/news/business-12196322">This article from the BBC news</a> contains the following statement:</p>
<blockquote>
<p>Energy bills are the biggest contributor to inflation at present, largely because of the impact of the Ukraine war on oil and gas prices. After a rise in the UK’s energy price cap last month, average gas and electricity prices jumped by 53.5% and 95.5% respectively compared with a year ago.</p></blockquote>
<p>Note in particular that they claim that energy bills, which are indeed the largest contributor to inflation, have risen ‘largely because of the impact of the Ukraine war on oil and gas prices’. In this the BBC is, I am sure innocently, simply repeating what the UK government wants us to believe.</p>
<p>But this statement is false: the UK government is lying, again. It is lying in order to make it less obvious that it is both grossly incompetent and simply does not care about the people of the UK.</p>
<p>First of all this statement should immediately make you wonder: what happened in 2021? I got a message from my former energy supplier on 14th October 2021, 133 days before Russia invaded Ukraine, which reads in part</p>
<blockquote>
<p>Due to the global energy crisis, record high wholesale energy costs, and the restrictions placed on us by the Ofgem Price Cap, we are sadly unable to keep operating [supplier]. […] The Government and Ofgem, our regulator, expects [supplier] to sell energy at a price much less than it currently costs to buy.</p></blockquote>
<p>My former supplier, along with many other suppliers, went out of business in 2021: that means <strong>the energy crisis was well underway by 2021</strong>.</p>
<p>But perhaps there is an escape: there was an energy crisis by 2021, yes, but perhaps the current crisis is still <em>largely</em> due to the war in Ukraine.</p>
<p>That also is false. Look at <a href="https://www.ofgem.gov.uk/publications/default-tariff-cap-level-1-april-2022-30-september-2022">this document</a>, in which Ofgem announced the increase in the tariff cap, and in particular look at the <a href="https://www.ofgem.gov.uk/sites/default/files/2022-02/Default%20tariff%20cap%20letter%20for%201%20April%2020221643903154554.pdf">attached letter</a> (PDF link). From that letter, you can read this:</p>
<blockquote>
<p>To all market participants and interested parties […] The level of the cap for the cap period eight (1 April 2022 to 30 September 2022) has increased by 54% since the last update. From 1 April 2022, the level of the cap will increase to £1,971.</p></blockquote>
<p>This letter is announcing the rise in the tariff cap described by the BBC above: about 54%. It is dated <strong>3rd February 2022</strong>: almost a month before Russia invaded Ukraine.</p>
<p>And indeed on 24th February 2022, coincidentally the day Russia invaded Ukraine, I received a letter from my current energy supplier in which the following text appears:</p>
<blockquote>
<p>Our prices are changing to reflect high wholesale energy costs in line with Ofgem’s latest price cap review.</p>
<p>[…]</p>
<p>Your current tariff [for electricity and gas]: £ <em>T</em>. Your new tariff: £ <em>T</em> × 1.4 […] Your electricity rates will change from 21.607p to 28.408p per kWh and your standing charge per day will change from 25.66p to 51.62p. Your gas rates will change from 4.197p to 7.476p per kWh and your standing charge per day will change from 26.11p to 27.22p</p></blockquote>
<p>[I have replaced the specific amounts I pay by <em>T</em> above.]</p>
<p><em>That capped tariff is the tariff I, along with almost everyone else, am currently paying</em>. The new cap was set in <em>early February 2022</em> for implementation on 1 April 2022. The war in Ukraine has had no influence on this cap, because it was agreed well in advance of the war.</p>
<p>In summary: <strong>The war in Ukraine has not yet influenced retail energy prices in the UK because the cap was decided in early February 2022 at the latest. To say otherwise is to spread misinformation.</strong></p>
<hr />
<p>Of course, the war in Ukraine <em>will</em> influence retail energy prices: the current cap runs until 30 September 2022. It is safe to say that when it expires there will be some very bad news indeed about UK retail energy prices. Combined with the certain large increases in the price of staple foods, the winter of 2022 is going to be extremely unpleasant: I think it not unlikely that significant numbers of people in the UK may well start to starve to death.</p>
<p>But this has not happened yet.</p>An unsent letter to Mel Stride, MPurn:https-www-tfeb-org:-fragments-2022-05-03-an-unsent-letter-to-mel-stride-mp2022-05-03T10:23:12Z2022-05-03T10:23:12ZTim Bradshaw
<p><em>They are corrupt, they have done abominable works, there is none that doeth good.</em></p>
<!-- more-->
<p>In the last week<sup><a href="#2022-05-03-an-unsent-letter-to-mel-stride-mp-footnote-1-definition" name="2022-05-03-an-unsent-letter-to-mel-stride-mp-footnote-1-return">1</a></sup>:</p>
<ul>
<li>Neil Parish, a tory MP, has been caught, twice, watching pornography on his phone in the house of commons;</li>
<li>an anonymous source, but almost certainly a tory MP, has told misogynistic, false, stories about a senior, female, labour figure to a newspaper which duly published them;</li>
<li>Liam Byrne, a labour MP, has been found to have bullied his staff and has been suspended;</li>
<li>Boris Johnson, prime minister and criminal, told more lies (although this is hardly news);</li>
<li>we’ve learned that 56 MPs (not all tories) are under investigation for sexual misconduct.</li></ul>
<p>Let’s take that last figure. Presumably not all of those under investigation will be found to have done whatever it is they have been accused of, and some of them will indeed be innocent. But given the very obvious culture of bullying in the house of commons and the pervasive lying, at least by senior tories, there are probably many other people too frightened to come forward. So let’s say that it’s a round 65 people all in. So, plausibly, one MP in ten has been sexually abusing people.</p>
<p>And that’s not the end: for each person who was doing this, how many others knew but did nothing? Based on my own personal experience as someone who knew but did nothing the answer is ‘several’, so let’s say two. If that’s correct (and it’s perhaps low if anything) it means about <em>one MP in three</em> was plausibly either sexually abusing people or knew others who were and chose to do nothing about it.</p>
<p>Let’s leave the bullying, the lying, the bribery, corruption and all the other manifold abuses which infest the UK’s politics for another day: this letter is already too long.</p>
<p>And while all this has been going on more than half of the MPs in parliament – <a href="https://members.parliament.uk/member/3935/voting">including you</a> – have been either actively supporting or too frightened to vote against legislation which would make Putin proud. <a href="https://bills.parliament.uk/bills/2839/publications">Peaceful protest is now criminalised in the UK</a>. <a href="https://bills.parliament.uk/bills/3023/publications">British citizenship can be removed without notice</a>. Based on transparent lies about imaginary election fraud, <a href="https://bills.parliament.uk/bills/3020/publications">voter ID will now be needed</a>, which it is estimated will <a href="https://committees.parliament.uk/publications/8194/documents/83775/default/">reduce turnout by over a million</a>, with a <a href="http://theconversation.com/democracy-undermined-elections-in-the-uk-are-changing-heres-how-182251">convenient bias</a> towards those who would not vote for the johnsonite tory party if they could vote. And finally, <a href="https://bills.parliament.uk/bills/3020/publications">the Electoral Commission is now under the control of ministers</a>: people who, you know, might have just a tiny conflict of interest, don’t you think?</p>
<p>There is someone else who told lies about electoral fraud in order to keep himself in power, isn’t there? The same person who attempted a coup in January 2021. The same person who has turned his party into <a href="https://www.economist.com/briefing/2022/01/01/the-republicans-are-still-donald-trumps-party-and-they-can-still-win" title="The Republicans are still Donald Trump’s party, and they can still win / TheEconomist">an explicitly anti-democratic shell</a> for his own desire for personal power, a shell which, quite probably, will turn the US into an authoritarian state in two years’ time. He might remind you of someone closer to home, I think.</p>
<p>The UK is not there yet, but the <a href="https://www.penguin.co.uk/books/111/1114753/the-death-of-democracy/9781786090300.html" title="The deathof democracy">death of democracy</a><sup><a href="#2022-05-03-an-unsent-letter-to-mel-stride-mp-footnote-2-definition" name="2022-05-03-an-unsent-letter-to-mel-stride-mp-footnote-2-return">2</a></sup> is now clearly in sight. The johnsonite tory party must now be seen as an explicitly authoritarian party aiming to secure eternal power for itself at any cost. You belong to that party: I will leave it to you to decide what that means.</p>
<p>(And don’t insult me by claiming that ‘it can’t happen here because the UK is a democracy’: in 1933 Germany was also a democracy; in the late 1990s Russia was a democracy. And besides, if it can’t happen here <em>why are you voting for it</em>?)</p>
<p>It’s a strange situation, isn’t it? Perhaps the best hope for the UK is that johnsonite tory MPs will be so stupid and so incompetent (really, how stupid do you have to be to watch pornography in the house of commons? or to go to parties during lockdown given that phone cameras are a thing? pretty fucking stupid, I think), and so busy masturbating, sexually assaulting people and taking money from Russian oligarchs in return for favours, that they’ll eat themselves alive before they get around to installing the one-party state they so clearly desire. That’s not an attractive choice.</p>
<p>And in the meantime, the next time some MP whines about how hateful and abusive people are to MPs who are ‘just doing a very hard and difficult job as best they can’, then we’ll know what to say. You know what: you don’t have hard or difficult jobs, and most of you don’t even know what a hard and difficult job <em>is</em>. Doctors and nurses working in hospitals have hard and difficult jobs. During the pandemic (which means now, because lying about it does not make it over) they have <em>very</em> hard and difficult jobs. People in Ukraine have extremely hard, difficult jobs. Do you think doctors and nurses keeping people alive have time to watch porn while doing so? No, you don’t in fact have hard and difficult jobs: you have easy, undemanding jobs which involve sitting around, talking and drinking. Or in many cases sitting around, lying, drinking and abusing people. Here’s a clue for you: if your job allows you time for a second job then whatever it is, it’s not hard. So don’t expect any sympathy from the people you lord it over like childish tinfoil tyrants.</p>
<p>And the next time <a href="https://www.bbc.co.uk/news/uk-politics-61255056">some halfwit</a> gets up on his hind legs and says that</p>
<blockquote>
<p>the problem in the house of commons is ultimately the overall culture of long hours, bars and people sometimes under pressure and after all of that, that can create a toxic mix that leads to all sorts of things</p></blockquote>
<p>we’ll also know what to say. Here’s the thing: being tired and drunk all the time doesn’t make people into bigots and misogynists: it removes their inhibitions so they express the bigotry and misogynism they always felt. If you behave like a bigot and a misogynist when you are tired and drunk (because your pretend-hard job somehow requires you to get drunk a lot), than it’s not because you’re tired and drunk: <em>it’s because you are a bigot and a misogynist</em>.</p>
<p>I wish no-one ill: but if the sea were to rise up tomorrow and drown the houses of parliament and everyone in them, I would not weep.</p>
<hr />
<p><em>Thou carriest them away as with a flood; they are as a sleep: in the morning they are like grass which groweth up.</em></p>
<hr />
<div class="footnotes">
<ol>
<li id="2022-05-03-an-unsent-letter-to-mel-stride-mp-footnote-1-definition" class="footnote-definition">
<p>This was written on May day, 2022. Mel Stride is my MP: I did think about sending it but what purpose would it have served? In the unlikely case that he read it rather than one of his staff, would it make him change his mind? Of course it would not. <a href="#2022-05-03-an-unsent-letter-to-mel-stride-mp-footnote-1-return">↩</a></p></li>
<li id="2022-05-03-an-unsent-letter-to-mel-stride-mp-footnote-2-definition" class="footnote-definition">
<p>A book you should read, along with <em><a href="https://en.m.wikipedia.org/wiki/How_Democracies_Die">How democracies die</a></em>. <a href="#2022-05-03-an-unsent-letter-to-mel-stride-mp-footnote-2-return">↩</a></p></li></ol></div>Avoiding circularity: a simple exampleurn:https-www-tfeb-org:-fragments-2022-03-23-avoiding-circularity-a-simple-example2022-03-23T17:54:40Z2022-03-23T17:54:40ZTim Bradshaw
<p>Here’s a simple example of dealing with a naturally circular function definition.</p>
<!-- more-->
<p>Common Lisp has a predicate called <a href="http://www.lispworks.com/documentation/HyperSpec/Body/f_everyc.htm"><code>some</code></a>. Here is what looks like a natural definition of a slightly more limited version of this predicate, which only works on lists, in Racket:</p>
<pre class="brush: racket"><code>(define (some? predicate . lists)
;; Just avoid the spread/nospread problem
(some*? predicate lists))
(define (some*? predicate lists)
(cond
[(null? lists)
;; if there are no elements the predicate is not true
#f]
[(some? null? lists)
;; if any of the lists is empty we've failed
#f]
[(apply predicate (map first lists))
;; The predicate is true on the first elements
#t]
[else
(some*? predicate (map rest lists))]))</code></pre>
<p>Well, that looks neat, right? Except it is very obviously doomed because <code>some*?</code> falls immediately into an infinite recursion.</p>
<p>Well, the trick to avoid this is to check whether the predicate is <code>null?</code> and handle that case explicitly:</p>
<pre class="brush: racket"><code>(define (some*? predicate lists)
(cond
[(null? lists)
;; no lists at all: this is now an error
(error 'some? "need at least one list")]
[(eq? predicate null?)
;; Catch the circularity and defang it
(match lists
[(list (? list? l))
(cond
[(null? l)
#f]
[(null? (first l))
#t]
[else
(some? null? (rest l))])]
[_ (error 'some? "~S bogus for null?" lists)])]
[(some? null? lists)
;; if any of the lists is empty we've failed
#f]
[(apply predicate (map first lists))
;; The predicate is true on the first elements
#t]
[else
(some*? predicate (map rest lists))]))</code></pre>
<p>And this now works fine.</p>
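<p>Here are a few example calls, as they might look at a Racket REPL (a sketch: the prompt and printing will vary, and the calls themselves are just illustrations):</p>
<pre class="brush: racket"><code>> (some? even? '(1 2 3))
#t
> (some? eqv? '(1 2 3) '(3 2 1))
#t
> (some? null? '((1) () (2)))
#t
> (some? null? '((1) (2)))
#f</code></pre>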
<p>Of course this is a rather inefficient version of such a predicate, but it’s nice. Well, I think it is.</p>
<hr />
<p>Note: a previous version of this had an extremely broken version of <code>some*?</code> which worked, by coincidence, sometimes.</p>Two understandable deficiencies in Common Lispurn:https-www-tfeb-org:-fragments-2022-03-22-two-understandable-deficiencies-in-common-lisp2022-03-22T09:58:28Z2022-03-22T09:58:28ZTim Bradshaw
<p>Common Lisp is, I think, a remarkably pleasant language, despite what some people like to say. Here are two small deficiencies, both of which are understandable in terms of the history of CL, and both of which ultimately hurt naïve programmers working in CL.</p>
<!-- more-->
<h2 id="the-default-floating-point-type-is-single-float">The default floating-point type is <code>single-float</code></h2>
<p>There are two things that make this true:</p>
<ul>
<li><a href="http://www.lispworks.com/documentation/HyperSpec/Body/v_rd_def.htm"><code>*read-default-float-format*</code></a> is initially <code>single-float</code>, which means that, unless it is changed, <code>1.0</code> reads as <code>1.0f0</code>, a single float<sup><a href="#2022-03-22-two-understandable-deficiencies-in-common-lisp-footnote-1-definition" name="2022-03-22-two-understandable-deficiencies-in-common-lisp-footnote-1-return">1</a></sup>;</li>
<li>The <a href="http://www.lispworks.com/documentation/HyperSpec/Body/f_float.htm"><code>float</code></a> function will convert to a single float unless it is given a prototype which is not a single float: <code>(float 1)</code> is <code>1.0f0</code>, while to get a double float you would need <code>(float 1 1.0d0)</code>.</li></ul>
<p>In addition things like <a href="http://www.lispworks.com/documentation/HyperSpec/Body/m_w_std_.htm"><code>with-standard-io-syntax</code></a> bind <code>*read-default-float-format*</code> to <code>single-float</code>, so you have to do a little more work to make doubles the default.</p>
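<p>Concretely, at a fresh REPL you might see something like this (a sketch: the exact printed representations are implementation-dependent):</p>
<pre class="brush: lisp"><code>> (type-of 1.0)
SINGLE-FLOAT
> (float 1)
1.0
> (float 1 1.0d0)
1.0D0
> (setf *read-default-float-format* 'double-float)
DOUBLE-FLOAT
> (type-of 1.0)
DOUBLE-FLOAT</code></pre>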
<p>I think there are probably several historical reasons why this default was chosen:</p>
<ul>
<li>a long time ago memory was very expensive and single floats take, usually, half the memory of double floats, thus pushing people towards single floats;</li>
<li>a long time ago, perhaps, on some machines, single float operations were significantly faster than double float operations even before possible float consing was taken into account;</li>
<li>Lisp hardware companies with significant influence on the standard, notably Symbolics, made hardware which allowed single (32 bit) floats to be immediate objects, while double floats were not, and had simple-minded compilers which were not capable of optimizing double float operations, thus making double float arithmetic extremely slow compared to single float arithmetic, and these companies wanted their machines to seem fast (they never, really, were) for naïve users;</li>
<li>it was not clear that implementations would choose <code>single-float</code> to mean ‘single precision IEEE 754 float’ and <code>double-float</code> to mean ‘double precision IEEE 754 float’, for instance it’s perfectly legal to have the <code>short-float</code> type mean single precision IEEE 754 and all of the <code>single-float</code>, <code>double-float</code> and <code>long-float</code> types mean double precision IEEE 754;</li>
<li>it wasn’t even clear that <a href="https://en.wikipedia.org/wiki/IEEE_754-1985">IEEE 754</a> would come to dominate how machines implement floating-point: VAXes didn’t, and other machines of interest at the time also did not.</li></ul>
<p>So there are good historical reasons for this. However all implementations I’m aware of now translate <code>short-float</code> to mean <code>single-float</code>, <code>single-float</code> to mean IEEE 754 single precision, <code>double-float</code> to mean IEEE 754 double precision and <code>long-float</code> to be the same as <code>double-float</code>.</p>
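<p>You can ask your implementation whether it does this (a sketch: the answers shown are what I would expect from a typical current implementation, but the standard does not require them):</p>
<pre class="brush: lisp"><code>;; T, T from both directions means short-float and single-float are
;; the same type; similarly for long-float and double-float.
> (subtypep 'short-float 'single-float)
T
T
> (subtypep 'single-float 'short-float)
T
T
> (subtypep 'long-float 'double-float)
T
T
> (subtypep 'double-float 'long-float)
T
T</code></pre>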
<p>So what is the problem with the default float type being <code>single-float</code> in the modern world? The answer is</p>
<pre class="brush: lisp"><code>> (log (/ 1 single-float-epsilon) 10)
7.22472</code></pre>
<p>In other words, single precision IEEE 754 arithmetic has about 7 significant figures of precision. For many purposes, and <em>especially</em> for naïvely-written code, that’s at best marginal and at worst less than that. On the other hand</p>
<pre class="brush: lisp"><code>> (log (/ 1 double-float-epsilon) 10)
15.954589770191001D0</code></pre>
<p>which is almost 16 significant figures of precision, more than twice that of single precision.</p>
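<p>Here is a tiny example of the kind of thing that bites naïve code (a sketch: these are the values IEEE 754 arithmetic produces, though the exact printed forms are implementation-dependent):</p>
<pre class="brush: lisp"><code>> (- 1.1 1.0)     ; single floats: the error is in the 8th figure
0.100000024
> (- 1.1d0 1.0d0) ; double floats: the error is in the 17th figure
0.10000000000000009D0</code></pre>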
<p>That’s why the default should have been double precision: it makes naïve code more likely to work, and people who are writing non-naïve code can use single precision if they need it.</p>
<h2 id="the-cl-user-package-is-defined-in-an-implementation-dependent-way">The <code>CL-USER</code> package is defined in an implementation-dependent way</h2>
<p>From <a href="http://www.lispworks.com/documentation/HyperSpec/Body/11_abb.htm">the spec</a>:</p>
<blockquote>
<p>The <code>COMMON-LISP-USER</code> package is the current package when a Common Lisp system starts up. This package uses the <code>COMMON-LISP</code> package. The <code>COMMON-LISP-USER</code> package has the nickname <code>CL-USER</code>. <em>The <code>COMMON-LISP-USER</code> package can have additional symbols interned within it; it can use other implementation-defined packages.</em></p></blockquote>
<p>(My emphasis.)</p>
<p>What this means is that when you start a CL environment, the current package may have all sorts of implementation-dependent symbols visible in it. You can see why this happened: if you’re implementing Super-Whizz-Bang CL which has all sorts of magic extra features, you want at least some of those features to be immediately available to users, rather than requiring them to pore over boring manuals to find them.</p>
<p>But for users, and especially for naïve users, it’s a terrible choice: naïve users don’t know about packages so they write their programs in <code>CL-USER</code>. And they also don’t really know which symbols available in <code>CL-USER</code> come from <code>CL</code> and are thus standard parts of the language, and which come from one of Super-Whizz-Bang CL’s implementation packages, and are <em>not</em> standard parts of the language. So their programs turn into a mess where the portable parts are not distinct from the non-portable parts. The way the <code>CL-USER</code> package is defined thus makes it harder for them to write programs whose non-portable parts are well-isolated, and ultimately hurts the language.</p>
<p>This is a direct conflict between implementors and users: implementors both want their extra features immediately available so their implementation is shinier and want to encourage users to use these extra features in a way which makes it hard to move their programs to other implementations; users, when they think about it, generally don’t want this second thing, at least.</p>
<p>Instead, the language should have defined <code>CL-USER</code> as a package which <em>only</em> used <code>CL</code>, and perhaps have defined another standard package, perhaps <code>IMPL-USER</code>, which was defined the way <code>CL-USER</code> is today.</p>
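<p>Nothing stops users doing this for themselves today, of course. A minimal sketch (the package name <code>CLEAN-USER</code> is just something I made up):</p>
<pre class="brush: lisp"><code>;; A user package which can see only the standard language: any
;; implementation-specific symbol must now be referred to explicitly.
(defpackage :clean-user
  (:use :cl))

(in-package :clean-user)</code></pre>
<p>Naïve users, though, are exactly the people who will never know to do this: which is the point.</p>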
<h2 id="can-these-be-fixed">Can these be fixed?</h2>
<p>While both of these problems could be fixed without changing the standard, I don’t think either can <em>realistically</em> be fixed.</p>
<p>For the <code>single-float</code> problem there is nothing to stop implementations simply defining <code>short-float</code> to mean IEEE 754 single precision and all the other types to mean IEEE 754 double precision. But all the existing code which assumes otherwise will then probably break in exciting ways. So this is unlikely to happen I expect.</p>
<p>The <code>CL-USER</code> problem could be fixed if implementations agree to define <code>CL-USER</code> to use only <code>CL</code>, as it is allowed to do, and perhaps to define an <code>IMPL-USER</code> package as above. Of course that will make implementations slightly less convenient to use, so the chances of it happening would be small, even if implementors actually talked to each other in any useful way, which I suspect they no longer do. Worse than that, this change will break many programs written by naïve users which live in <code>CL-USER</code>, and there are almost certainly lots of those.</p>
<hr />
<p>A moment of convenience, a lifetime of regret, as the old saying goes.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2022-03-22-two-understandable-deficiencies-in-common-lisp-footnote-1-definition" class="footnote-definition">
<p>An earlier version of this article had single floats written as, for instance <code>1.0s0</code>: that’s wrong, those are <em>short</em> floats, single floats are <code>1.0f0</code> for instance. These are almost certainly the same type on any current implementation (and I think on any implementation I have ever used, hence the mistake) but they don’t have to be. Thanks to Prem Nirved for finding this stupidity. <a href="#2022-03-22-two-understandable-deficiencies-in-common-lisp-footnote-1-return">↩</a></p></li></ol></div>What if Putin is rational?urn:https-www-tfeb-org:-fragments-2022-03-07-what-if-putin-is-rational2022-03-07T14:19:52Z2022-03-07T14:19:52ZTim Bradshaw
<p>Putin’s invasion of Ukraine is horrifying. As well as the awfulness of what is happening to the people of Ukraine, Putin’s apparent irrationality is terrifying. What if he is not being irrational?</p>
<!-- more-->
<p>A strategy that the Russians have used against both Napoleon and the Nazis is to retreat further than anyone thinks is possible, accept more losses than anyone thinks is possible and then wait for the winter to do its work. Perhaps Putin is using a variant of this strategy. In particular perhaps he simply does not care whether he wins in Ukraine because that’s not what he’s trying to do: he’s not trying to annexe Ukraine, he’s trying to destroy the west.</p>
<p>The invasion of Ukraine will have significant economic repercussions:</p>
<ul>
<li>sanctions on Russia will have very severe repercussions for Russia, but they will also have economic repercussions on the west which will be at least fairly severe;</li>
<li>oil and gas prices will go up significantly and the west is not anywhere near in a position to escape from fossil fuel dependency;</li>
<li>a lot of wheat is grown in Ukraine, and the harvest is not going to be very good this year, which will push up wheat prices and hence food prices.</li></ul>
<p>So there will be catastrophic economic effects in Russia, but also severe to very severe effects in Europe and the west more generally.</p>
<p>In addition the attack is causing an enormous refugee crisis: as I write this <a href="https://www.bbc.co.uk/news/world-60555472">more than 1.7 million Ukrainians have left Ukraine</a>. I don’t know how many might eventually be driven out of course, but it might be of the order of 10 million people. Essentially all of those people are going to be driven west, into central and western Europe. This will be an enormous humanitarian crisis in Europe: bigger than anything seen since the second world war.</p>
<p>Well, in 2007–2008 there was a very significant economic crisis, and from 2011 to now there has been a civil war in Syria which has caused a refugee crisis. And I don’t think it’s controversial to say that one of the results of this was populism, authoritarianism and large-scale bigotry. These crises gave us Trump, Bolsonaro, Orban and the johnsonites, and they gave us brexit and other disasters.</p>
<p>So what is this crisis going to give us? More of the same, almost certainly. Everyone loves the Ukrainians now, but when there are 10 million of them trying to find a way of living in central and western Europe a lot of people are going to like them a lot less: there is going to be a lot – a lot – of anti-Ukrainian bigotry. And while this is happening, food and fuel will be becoming far more expensive: almost everyone will be poorer, and in particular poor people, for whom food and fuel are a larger proportion of their spending.</p>
<p>And in the background, climate change will be doing its inevitable work: weather-related damage will be increasing, harvests will be poorer and refugees from areas becoming dramatically less habitable will be arriving in ever greater numbers. And people like me will be saying that we must therefore reduce our dependency on fossil fuels rapidly if we want to have a long-term future. But decades of denying the problem exists will mean we can’t do that, and the populist demagogues given power by the crisis in Ukraine will say we don’t need to do that anyway. And so things will get worse, and they will get worse faster than they were before the crisis. And we’ll get more populists, more authoritarianisms and more democracies will fail.</p>
<p>But populism and authoritarianism <em>don’t work</em>: populism seeks to provide simple, appealing, answers (‘send the nasty foreigners home’) to complex, unappealing, problems (‘how do we deal with climate change?’), and those answers are <em>wrong</em>; authoritarianism doesn’t even pretend to look for answers because the answer to all questions is ‘do as you’re told or you will be killed’. Both systems make a few people better off but almost everyone poorer. So once liberal democracies get replaced by populist or authoritarian regimes things almost always get <em>worse</em>, and the forces which gave those regimes power become stronger: ‘if sending the foreigners home didn’t work, perhaps we should just kill them?’. And things thus get worse, and they get worse ever faster.</p>
<p>Thought of as a physical system, liberal democracies are not necessarily <em>stable</em>: they tend to fall off their plateau into nasty regimes of various kinds, which in turn cause things to move further down from the plateau, and so on. As has been obvious in the last few years, a lot of work is needed all the time to defend them. Even fairly small external kicks of various kinds can destabilise them, and there will always be people working within them to do the same thing.</p>
<p>So, perhaps Putin’s ongoing rape of Ukraine is not an irrational attack based on some toxic nostalgia for the USSR: perhaps it is an entirely rational attempt to do something else. Perhaps he does not really care what happens to Ukraine because that’s not what he’s interested in. Perhaps he is using the strategy that has worked for Russia before: accepting more damage than anyone thinks Russia can in order to cause, in the liberal democracies of Europe, lesser but still very significant economic damage along with a vast humanitarian crisis, with the aim of causing them to collapse. After all, he is not himself affected by the economic damage being done to Russia: it is only ordinary Russians who will starve. And he does not care about them.</p>
<p>Well, I hope I am wrong: I usually am wrong.</p>Vector supercomputersurn:https-www-tfeb-org:-fragments-2021-12-30-vector-supercomputers2021-12-30T12:20:51Z2021-12-30T12:20:51ZTim Bradshaw
<p>There are apocryphal reports that Apple M1 systems are not as fast as people have been led to believe for general-purpose programs. That’s unsurprising.</p>
<!-- more-->
<p>I think what’s happened is that vector supercomputers have secretly won, and with them come all their performance weirdnesses that make a lot of code really suck: no-one wanted to run anything other than rather specialised programs on a Cray 1 or any of its descendants because it was just not very fast for that. Vector supercomputers were great at numerical loops over large arrays, but they were absolutely terrible at code which had to make lots of actual decisions.</p>
<p>So now we’re seeing machines which are optimised to be extremely good at mashing arrays of numbers, and much less good at general computation. Of course, unlike on the 1970s & 80s machines, ‘much less good’ is now ‘quite good enough’ in almost all cases.</p>
<p>And they’ve won, really, because we’re in the middle of another AI hype-cycle: the last hype cycle gave us all sorts of weird hardware like Lisp machines, graph-reduction machines and so on: this one, which is built, really, on programs which ought to be written in Fortran, is giving us special-purpose array-mashing machines — vector supercomputers, in other words — which are really good at all the annoying machine-learning things our computers now insist on foisting on us.</p>
<p>Well, this AI hype cycle will be like all the other AI hype cycles: despite the idiot boosters who have conveniently forgotten what happened last time and all the times before that, we are not anywhere near some kind of strong AI based on machine learning. Already you can see this: whatever language-learning system we’re all meant to worship at the feet of has now been trained on <em>all the natural language that exists on the internet</em>, in order to produce results which are not, in fact, acceptable. And there’s nowhere to go from here: there is no more training data.</p>
<p>It remains to be seen whether array-mashing machines outlive the hype that gave rise to them: there are good uses for systems like this, just as there are good uses for machine learning, but when the bubble bursts it may yet take them with it.</p>The way outurn:https-www-tfeb-org:-fragments-2021-12-04-the-way-out2021-12-04T13:07:57Z2021-12-04T13:07:57ZTim Bradshaw
<p>Many people would like to believe that the CV19 pandemic is over. Unfortunately viruses do not listen to what people want to believe: the CV19 pandemic is not over, and there is a significant possibility it may <em>never</em> be over. The way out is not to pretend that it is.</p>
<!-- more-->
<h2 id="cv19-is-not-over">CV19 is not over</h2>
<p>Unless CV19 can be <em>globally</em> eliminated it will not be over: new cases will leak into countries however hard they try to prevent it. Eliminating CV19 globally requires achieving herd immunity through vaccination or infections <em>everywhere</em>.</p>
<p>We are not particularly close to that. It’s tempting to do a lot of calculations at this point to show this but those calculations are fiddly and I always get them wrong. Instead consider the UK: currently (3rd December 2021) <a href="https://ourworldindata.org/explorers/coronavirus-data-explorer?zoomToSelection=true&facet=none&uniformYAxis=0&pickerSort=asc&pickerMetric=location&Metric=People+vaccinated+%28by+dose%29&Interval=Cumulative&Relative+to+Population=true&Align+outbreaks=false&country=GBR~DEU~Europe~USA~FRA" title="Our World in Data">about 75% of the population have had at least one dose</a> and there is a current effort to roll out third, ‘booster’, doses. Yet, even before the omicron variant, CV19 was nowhere near gone in the UK. And the UK is doing reasonably well at vaccinations. Boris Johnson’s lies notwithstanding it is <a href="https://ourworldindata.org/explorers/coronavirus-data-explorer?zoomToSelection=true&time=2021-12-03&facet=none&pickerSort=asc&pickerMetric=location&Metric=People+fully+vaccinated&Interval=7-day+rolling+average&Relative+to+Population=true&Align+outbreaks=false&country=ARE~PRT~CUB~CHL~ESP~SGP~KHM~URY~KOR~CAN~CHN~IND~USA~IDN~PAK~BRA~NGA~BGD~RUS~MEX~JPN~ETH~PHL~EGY~VNM~TUR~IRN~DEU~THA~GBR~FRA~TZA~ITA~ZAF~KEN~OWID_WRL" title="Proportion of people fully vaccinated, 3rd December 2021">nowhere near</a> <a href="https://ourworldindata.org/explorers/coronavirus-data-explorer?zoomToSelection=true&time=2021-12-03&facet=none&pickerSort=asc&pickerMetric=location&Metric=People+vaccinated+(by+dose)&Interval=7-day+rolling+average&Relative+to+Population=true&Align+outbreaks=false&country=ARE~PRT~CUB~CHL~ESP~SGP~KHM~URY~KOR~CAN~CHN~IND~USA~IDN~PAK~BRA~NGA~BGD~RUS~MEX~JPN~ETH~PHL~EGY~VNM~TUR~IRN~DEU~THA~GBR~FRA~TZA~ITA~ZAF~KEN~OWID_WRL" title="Proportion of people vaccinated by dose, 3rd December 2021">the top of the table</a>, but it is well above the global average: globally about 55% of people have had at least one dose.</p>
<p>Unfortunately, by hoarding both vaccines and the rights to manufacture them, the rich countries are actively hurting the effort to globally eliminate CV19. So they are undermining the efforts they are making to protect their own populations. It is, apparently, too hard for politicians to understand that the virus cares much less about some lines drawn on a map than they do.</p>
<p>Worse than this, until CV19 is eliminated, it will still continue to evolve new variants, which will spread if they are fit. If we assume that CV19 won’t be eliminated soon, what’s likely to happen, in rich countries like the UK, as new variants appear?</p>
<p>The remainder of this essay concentrates on the UK as far as human responses go: that is obviously parochial, but the UK is where I live and I am most familiar with what the responses are there<sup><a href="#2021-12-04-the-way-out-footnote-1-definition" name="2021-12-04-the-way-out-footnote-1-return">1</a></sup>.</p>
<h2 id="what-might-the-virus-do">What might the virus do?</h2>
<p>There are two important choices, which are orthogonal to each other: will a much-less-deadly variant become dominant? and will a variant which escapes the current vaccines become dominant? Neither of these cases excludes anything happening in the future: for instance a mild variant which does not escape vaccines could become dominant this year, only for a serious variant which does escape the vaccines to become dominant in a few years if such a variant has a selective advantage. As long as the virus exists it will be endlessly trying new variants.</p>
<p>Two independent choices gives a total of four scenarios.</p>
<h3 id="a-mild-variant-flu">A mild variant: ‘flu’</h3>
<p>Two scenarios involve a mild variant which may, or may not, escape the current vaccines: these are the ‘flu’ scenarios. Such a variant might become seasonal in the way flu is (CV19 may well already <em>be</em> seasonal of course: we just haven’t lived through enough seasons yet). If the current vaccines don’t work for this variant the results will not be too severe, and new vaccines will be developed. If vaccines, new or existing, confer long-term immunity, then things would become relatively normal. It’s likely that they don’t, however, so we would probably require regular courses of vaccinations, which may be new vaccines as the virus evolves: this is still pretty close to normal life.</p>
<p>Well, we could live with that, the way we live with flu. Except that we don’t always live with flu: Spanish flu<sup><a href="#2021-12-04-the-way-out-footnote-2-definition" name="2021-12-04-the-way-out-footnote-2-return">2</a></sup> killed between 17 and 50 million people, and perhaps as many as 100 million (by comparison, The Economist thinks that CV19 <a href="https://www.economist.com/graphic-detail/coronavirus-excess-deaths-estimates" title="The pandemic’s true death toll">has probably killed about 17 million people so far</a> from a larger population). Again, as long as CV19 exists it will be developing new variants and there is nothing I can see to stop one arising which is like that.</p>
<p>But, perhaps, this is at least no <em>worse</em> than flu is now, except perhaps that more people will need to be vaccinated more often.</p>
<h3 id="no-vaccine-escape">No vaccine escape</h3>
<p>A third scenario is that CV19 stays roughly as deadly as it is now, but the vaccines we have keep working against it, probably with regular courses required. People continue to die in significant numbers, with those numbers depending to a great extent on the precautions people are willing to accept. This may seem much like the flu case except that a lot more people die. It might be only that: there are unfortunately much nastier possibilities discussed below.</p>
<h3 id="vaccine-escape">Vaccine escape</h3>
<p>The final scenario is that a variant arises which is significantly deadly and for which the current vaccines are not effective. This is year zero: new vaccines will need to be developed, and a new urgent vaccination programme will be required. Until the vaccination programme is well under way there will need to be very significant restrictions on social contact if very high death tolls are to be avoided. Unfortunately very high death tolls during the vaccine development and the early stages of the vaccination programme are, again, far from the worst things that could happen.</p>
<h3 id="other-scenarios">Other scenarios</h3>
<p>There are other possibilities. A variant might arise which is much more deadly, for instance. It’s easy to argue that very deadly viruses are selected against: a virus which kills too many of its hosts will tend not to thrive in competition with less lethal ones. But in real life things are not that simple: the black death <a href="https://en.wikipedia.org/wiki/Black_Death" title="The Black Death">killed between 75 and 200 million people in 7 years</a>, killing between 30% and 60% of the population of Europe, and perhaps 25% of the world’s population. <em>Yersinia pestis</em> was, perhaps, not competing with less deadly versions of itself, wasn’t so subject to mutation as a virus would be and there were other factors, but still: very bad things can happen.</p>
<p>A very nasty possibility is that a variant will arise against which <em>no</em> vaccines work very well. Before the current vaccines were developed some people were suggesting this (search for ‘there has never been a successful vaccine against a coronavirus’: I am not going to link to any of the results because some of them are awful people who do not deserve anyone’s attention). It seems to me that this is absurdly unlikely, but I’m not an expert.</p>
<p>No doubt there are many other scenarios I have not thought about.</p>
<h3 id="an-endless-war">An endless war</h3>
<p>Once again: until the CV19 virus is globally eliminated it will continue to evolve new variants. None of the above scenarios excludes any of the others: CV19 will explore as much of the space of options as it can. Until it is eliminated the pandemic will not be over: if it is never eliminated the pandemic will <em>never</em> be over.</p>
<h2 id="what-might-the-humans-do">What might the humans do?</h2>
<p>The virus is only one of the players in this game: the other is us. What happens depends on our response as much as what the virus does.</p>
<h3 id="normality">Normality</h3>
<p>If the virus is eliminated then normal pre-pandemic life resumes. In either of the ‘flu’ scenarios something quite like normal life resumes. In the flu scenarios normal life only resumes so long as the virus doesn’t evolve some much nastier variant. So, rationally, in these scenarios, work should still continue to eliminate the virus globally as fast as possible. I think it’s safe to say that won’t happen, so the normality in these scenarios is almost certainly impermanent.</p>
<h3 id="stability">Stability</h3>
<p>If there is a deadly variant which does not escape the vaccines then it’s possible to imagine a stable scenario where some combination of restrictions on social contact, regular vaccinations, masks, and just accepting that a fair number more people die each year than before CV19 will keep things under control. That’s a nice dream, anyway.</p>
<h3 id="instability">Instability</h3>
<p>A variant which escapes the vaccines results in instability: it is essentially a whole new pandemic and rapid lockdowns will be needed while new vaccines are developed to avoid very high death tolls or even worse outcomes. The instability can be minimised by careful management but the chances of that seem low.</p>
<p>Unfortunately I think that, even for variants that do not escape current vaccines, stability is unlikely. Instead there will be some more-or-less chaotic cycle of too-much relaxation followed by panics as death rates rise. None of this is helped by politicians who, in the UK at least, do not care very much if many people die, are not competent to understand what is required for stability, and neither understand nor care about the consequences of serious instability.</p>
<h2 id="what-is-really-happening">What is really happening?</h2>
<p>We’re still in the early stages of the pandemic, but what has actually happened?</p>
<h3 id="the-virus">The virus</h3>
<p>I don’t know. Since I started writing the <a href="https://en.wikipedia.org/wiki/SARS-CoV-2_Omicron_variant" title="Omicron variant">omicron variant</a> has appeared: this has a very high number of mutations, 62, compared to the original virus, of which 32 affect the spike protein which vaccines target (<a href="https://en.wikipedia.org/wiki/SARS-CoV-2_Delta_variant" title="Delta variant">delta</a> had 8 or 9). Currently (5th December 2021) it’s not known how infectious it is or how severe the illness it causes is compared to the previous, delta, variant. More seriously it’s not known how well vaccines work for it, but with a very large number of mutations on the spike protein people are clearly pretty worried.</p>
<p>So the omicron variant might be a ‘flu’ variant, a vaccine escape variant, both, neither, or something else. It might also not be very interesting at all. But there will be more variants in an effectively endless succession<sup><a href="#2021-12-04-the-way-out-footnote-3-definition" name="2021-12-04-the-way-out-footnote-3-return">3</a></sup>: sooner or later something interesting <em>will</em> arise. Given selective pressure on the virus it will probably be sooner: if omicron is not it we are not off the hook.</p>
<h3 id="the-humans">The humans</h3>
<p>What are the humans doing? In particular what is the UK government doing? As a populist government what it does is to offer simple, appealing, wrong answers to complex, unappealing problems. It gives the answers that people would like it to give, without ever trying to explain why those answers are wrong or what the consequences of believing them will be. The johnsonites are very far from democratic, quite the opposite in fact, but in this case we can treat them as avatars of what people would like to be told is true.</p>
<p>Well, until a few days ago the answer was that they were saying that the pandemic was over and that normal life could resume. Now there is incoherent messaging: scientific and medical advisers are advising caution in the face of omicron, while Boris Johnson is publicly ignoring them. Johnson, clearly, is too stupid to understand the consequences of what he is doing and would not care about them if he did. He also has demonstrated, publicly and repeatedly, that he believes <a href="https://www.bbc.co.uk/news/uk-politics-59491568" title="Christmas parties">rules do not apply to him</a>, thus ensuring that no-one else obeys them either. More competent (perhaps merely less incompetent) members of the government are giving more cautious messages, but there is simply no coherent strategy and the government is very obviously no longer ‘following the science’ nor even pretending to do so.</p>
<p>This is a recipe for instability: if omicron is serious then Johnson’s strategy, if it can be called such, will maximise its impact early in the new year. Johnson shows no sign of having learnt anything at all from his earlier mistakes: if anything he’s learnt that he can, in fact, get away with murder. If lessons are not learnt from this cycle the best we can hope for is a continuing chaotic cycle of restrictions followed by relaxation, for years.</p>
<p>The worst is that Johnson continues to maintain that it is all over, while people die around him in huge numbers. This is stability, of a kind, but not one anyone should wish for. Sadly many people do seem to wish for it, and to be happy with enormous numbers of deaths so that they don’t have to experience the momentary inconvenience of wearing a mask or otherwise behaving safely.</p>
<h2 id="collapse">Collapse</h2>
<p>That is, in fact, not the worst outcome. Because we live in a society built on complex systems which took a long time to assemble and which, if they are stressed to the point of collapse, can not then be reassembled quickly, if at all.</p>
<p>In 2008 the global financial system came close to collapse. Many people said at the time, and probably still do say, that the banks should just have been allowed to fail. Those people were fools. Banks are close to the archetypal complex system which, if it collapses, can not quickly be repaired if it can be repaired at all. If the banking system had been allowed to fail in 2008 then essentially money would have ceased to exist: ATMs would have stopped working, salaries would have stopped being paid, everything involving money would have stopped. And once that had happened it would have taken years to restart. Pretty quickly people would have started getting hungry, there would have been riots and far worse. And this would have gone on for years. Huge numbers would have died. The 2008 financial crisis was a nasty experience, but it was <em>vastly</em> less nasty than what was narrowly avoided.</p>
<p>There is another such complex system: the health service. The NHS is one of the great achievements of post-war Britain: I think the greatest in fact. If the NHS is pushed too hard it, too, will collapse, and if it collapses, bad things will happen. And CV19 is pushing the NHS very hard indeed. Already many people are dying of things which would have been treated if not for CV19, and people who are not dying are sitting in lengthening backlogs which will take years or decades to clear. And this is only the start of what could happen. People who are working in ICUs will eventually become burnt out: they’ll end up shell-shocked and unable to work. And so the number of staff in ICUs will decline just as the requirement for them increases. That’s a death spiral: more and more people will get burnt out as their workload increases because their colleagues have already become burnt out. These people will themselves then need care, which is already very limited. Without care they may never return to work. And it takes quite a long time to train someone to work in an ICU: it is not an easy job. And, having witnessed what happens to people who work in ICUs, who is going to apply to be trained?</p>
<p>So the likely end result of a series of chaotic cycles of relaxation and panic is that the NHS will collapse in due course. And the likely end result of simply accepting large ongoing death rates, of Johnson’s stability through suffering, is that the NHS will collapse rather soon.</p>
<p>And if the NHS collapses it can’t be put back together quickly. Perhaps it can’t be put back together at all. And very large numbers of people will then die.</p>
<p>Avoiding collapse of the NHS is critical, but the UK government shows no sign of being competent to do so, or in fact of caring if the NHS collapses.</p>
<h2 id="the-way-out">The way out</h2>
<p>The way out is to eliminate the virus, globally. Until we can do that the best we can hope for is that it becomes like flu and that no nastier variant arises. I can’t see any reason why a nastier variant should not arise from a flu-like variant: nastier variants of <em>flu</em> arise, after all. So a flu-like stage, though very desirable short of elimination, is probably only temporary: something nastier will come back.</p>
<p>Managing the presumed nastier variants is hard. Inevitably there will be cycles of restriction and relaxation. Those cycles will have inevitable economic impact. That is not something that can be wished away.</p>
<p>The UK government, as with populists everywhere, has been hugely incompetent at managing the first few cycles, and shows no sign of becoming more competent. Johnson is stupid, uncaring and believes rules do not apply to him: while he controls the UK government there is little hope. Johnson, perhaps, believes that he can declare the pandemic over and it will be over. But the virus does not care what he thinks.</p>
<p>Almost certainly, absent a competent government, the UK will therefore experience a series of chaotic cycles of relaxation and restriction culminating in the probable collapse of the NHS and all that implies.</p>
<p>A competent government would understand that until the virus is eliminated the old world is simply gone: the world has now changed. Working from home is here for good, with all that implies for cities. Masks are here for good. A competent government would work to educate its people about this. And it would understand that since elimination will take years at least some of these changes must be regarded as permanent: in five years or a decade no-one will want to go back to spending three hours a day on a packed train or in a traffic jam.</p>
<p>The world has changed, and it has changed irrevocably, one way or another. The way out is to accept that there is no way back.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-12-04-the-way-out-footnote-1-definition" class="footnote-definition">
<p>Disclaimer: I’m not an expert and I’m not even pretending to be one on the internet. I’m just trying to understand things as best I can, then writing them down so I can see how wrong I was, later. <a href="#2021-12-04-the-way-out-footnote-1-return">↩</a></p></li>
<li id="2021-12-04-the-way-out-footnote-2-definition" class="footnote-definition">
<p>Which, of course, was not Spanish. <a href="#2021-12-04-the-way-out-footnote-2-return">↩</a></p></li>
<li id="2021-12-04-the-way-out-footnote-3-definition" class="footnote-definition">
<p><a href="https://en.wikipedia.org/wiki/Severe_acute_respiratory_syndrome_coronavirus_2" title="SARS-CoV-2">SARS-CoV-2</a> has a genome about 30,000 bases long: there are \(4^{30000}\) (roughly \(10^{18000}\)) such genomes. Only a tiny proportion of those will encode anything interesting and only a tiny proportion of <em>those</em> will encode anything very like SARS-CoV-2, but that is still a lot of variants. <a href="#2021-12-04-the-way-out-footnote-3-return">↩</a></p></li></ol></div>The way downurn:https-www-tfeb-org:-fragments-2021-11-30-the-way-down2021-11-30T16:12:38Z2021-11-30T16:12:38ZTim Bradshaw
<p>The idiot child god and where he will lead us.</p>
<!-- more-->
<h2 id="a-five-year-old-boy">A five-year-old boy</h2>
<p>Something I’ve recently realised is that to understand the johnsonites you need only to understand that Boris Johnson is a rather dim, extremely spoilt, five-year-old boy from an extremely privileged background. He may look like an adult but he’s not: he’s a small child wearing <a href="https://en.m.wikipedia.org/wiki/Buffalo_Bill_(character)" title="It rubs the lotion on its skin or else it gets the hose again">an adult suit</a>.</p>
<p>I remember being five, more-or-less. When I was five I had no notion that other people were really people: I thought that I was perfect, that I was infallible, that I was a little god who one day would grow into a far greater god. I thought the world, and my parents who made up most of it, existed to serve me; that my siblings and the people at school were lesser beings who also, in due course would exist to serve me as I grew into my godhood. I thought, in other words, what all five-year-olds think. I remember, vividly, the time during which I realised that this was not true: that I was just a person like all the other people, that I wasn’t as good or as clever or as handsome as some of them, perhaps as most of them.</p>
<p>I think this happens to almost all people: almost every small child thinks that they are a tiny god and that the world is built around them. At some age larger children realise that this is not true and this is a huge shock to them. Except, for some few people it never is: some people simply never realise that they are not god. Johnson is such a person: he has, quite simply, never realised that the world, and everything in it, exists for any purpose other than to serve him. He was not helped, of course, by growing up to a life of extreme privilege: much of his world did indeed seem as if it existed to keep him and his class in their state of comfort and idleness.</p>
<p>If Johnson was clever rather than merely glib he would be absolutely terrifying: he is pretty terrifying as he is, but you really do not want clever people who think they are gods anywhere near you<sup><a href="#2021-11-30-the-way-down-footnote-1-definition" name="2021-11-30-the-way-down-footnote-1-return">1</a></sup>.</p>
<p>Johnson’s purpose in life is to <em>maximise Johnson</em>: everything, for him, exists only to further his own ascension to godhood and nothing must interfere with that. Nothing must ever be allowed make him feel bad about himself or question his own judgement – as an incipient god he is, of course, infallible and no questions must ever be asked or, if they should be asked then the questioners must be derided as naysayers, disruptive influences or worse.</p>
<h2 id="the-johnsonite-revolution">The johnsonite revolution</h2>
<p>So Johnson’s purpose is Johnson: more money for Johnson, more glory for Johnson, more acolytes for Johnson, more power for Johnson, more sex for Johnson<sup><a href="#2021-11-30-the-way-down-footnote-2-definition" name="2021-11-30-the-way-down-footnote-2-return">2</a></sup>, more children for Johnson. That is all he cares about: nothing else matters, at all.</p>
<p>Sadly for Johnson and us all he is not actually very good at anything. Achieving maximum Johnson is hard when the only Johnson you have available is, frankly, second-rate. Like his idol Churchill he wants to be a great writer of history but he is very far from being that. He can write witty articles full of subtle bigotry and offence, but his talent, such as it is, is no greater than that of any number of other journalists and far less than that of the best of them<sup><a href="#2021-11-30-the-way-down-footnote-3-definition" name="2021-11-30-the-way-down-footnote-3-return">3</a></sup>.</p>
<p>In 2015 he must have wondered how he was to transmute this base johnsonite ore into the gold of godhead which he never doubted he deserved. We know the answer he came to: brexit. Brexit was never a good idea, and a botched brexit was likely to do very serious damage to the country. But it also might give him power, which was far more important: Johnson was happy to sacrifice his country without a thought.</p>
<p>And it worked: brexit did give him power. And the cost of that power for us all was altogether predictable but terrible nonetheless. Enthusiasm for brexit, with <a href="https://en.m.wikipedia.org/wiki/Douglas_Carswell" title="Douglas Carswell">perhaps a few exceptions</a>, is not normally associated with great intellect among politicians but Johnson, believing himself now a very god, could tolerate no dissent, no questioning, any more than any other spoilt five-year old boy could. And so he purged the parliamentary Conservative party of anyone who might doubt him, of anyone who might be cleverer than he was: constructing a government of the inadequate, a government of incompetents, ideologues and the dull-witted. Johnson has laid his eggs in the Conservative party like a parasitoid wasp, and now this new johnsonite party is growing in its body, consuming it from within while it still lives.</p>
<p>This is the maximum Johnson revolution.</p>
<h2 id="like-saturn">Like Saturn</h2>
<p>Rule by spoilt five-year-old boy was never going to end well. And it is not ending so well, is it? A child, seeking personal power and glory at any cost, does not make decisions which are good for anyone but himself. And as he <em>is</em> a child he doesn’t even make decisions which are good for himself in the long run: there is a reason why parents have authority over their children, it turns out. Being unable to be wrong means that he can never correct errors: he can never learn from his mistakes since he believes himself incapable of them. Surrounding himself only with people who are unwilling or unable to challenge him makes this worse.</p>
<p>Brexit was always going to make the UK poorer and weaker, and was always going to imperil the UK’s relations with its much larger and more powerful neighbour. The Northern Ireland situation probably had no really good solution. But Johnson hasn’t even tried: he, or his stooge David Frost, negotiated a minimal deal which, less than a year later, they are going back on, demonstrating in the most public possible way that they have either acted in bad faith throughout or were simply not competent to understand the implications of what they were doing<sup><a href="#2021-11-30-the-way-down-footnote-4-definition" name="2021-11-30-the-way-down-footnote-4-return">4</a></sup>.</p>
<p>But Johnson can never be wrong, so the catastrophes of the brexit he chose will always be the fault of other people.</p>
<p>And then of course the world throws something unexpected at him, in the form of CV19<sup><a href="#2021-11-30-the-way-down-footnote-5-definition" name="2021-11-30-the-way-down-footnote-5-return">5</a></sup>. Something he is even more utterly incompetent to deal with than the fallout from the brexit he engineered. It is hard to know how many people he has now killed due to his lack both of competence and of care, but it is safe to say that it is tens of thousands. And it is not over: the omicron variant may escape immunity through vaccines or previous infection<sup><a href="#2021-11-30-the-way-down-footnote-6-definition" name="2021-11-30-the-way-down-footnote-6-return">6</a></sup>, in which case, if it is as deadly as previous variants, we are starting again.</p>
<p>And Johnson can never be wrong, and Johnson can never learn so all the mistakes must have been made by other people. And he will make exactly the mistakes he made before, and the corpses will pile up in his wake. And these mistakes, too, will be someone else’s fault.</p>
<p>Like Saturn, Johnson’s revolution is eating its own children<sup><a href="#2021-11-30-the-way-down-footnote-7-definition" name="2021-11-30-the-way-down-footnote-7-return">7</a></sup>.</p>
<h2 id="the-way-down">The way down</h2>
<p>Where do we go from here?</p>
<p>Johnson is vastly incompetent but can never be wrong. If he is not removed by some kind of coup within the tory party<sup><a href="#2021-11-30-the-way-down-footnote-8-definition" name="2021-11-30-the-way-down-footnote-8-return">8</a></sup> then he will continue to lead us on the way down: the only way he knows. As disaster follows disaster, he must find endless new people to blame. So when the Northern Ireland agreement turns out not to work very well, somehow this is the fault of the EU, and the EU is duly demonised. So he will publish a rather stupid letter he wrote to the French president, causing the French to react, he hopes, badly. So now he can blame the French for the invented refugee crisis. So he will blame ‘remoaners’ who, somehow, are to blame for the ills of brexit. So he will blame the judges for getting in the way of his idiot brexit. So someone will be found to blame for the mounds of the CV19 dead. And so it goes on, for ever, with Johnson and his acolytes finding ever new groups to blame, waving their idiot flags and working their supporters into an ever stronger frenzy of resentment and hatred.</p>
<p>This strategy of finding identifiable groups to blame for your mistakes is familiar because it has happened before. It is the strategy of authoritarians, both those we call fascists and those we call communists, everywhere and always. And it ends with camps, pogroms and death. Not yet, not even soon, and not yet inevitably, but we are on the way.</p>
<p>It’s not dark yet, but it’s getting there.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-11-30-the-way-down-footnote-1-definition" class="footnote-definition">
<p>No-one would want to be very near Elon Musk. <a href="#2021-11-30-the-way-down-footnote-1-return">↩</a></p></li>
<li id="2021-11-30-the-way-down-footnote-2-definition" class="footnote-definition">
<p>Although his mentality is that of a privileged five-year-old boy, his body is not: like most physically-adult people he wants sex. And he behaves exactly the way you would expect: he has limited or no self-control. Who knows how many children he has by how many different partners: perhaps not even he does. Who knows how many partners he has had and how many he has been unfaithful to? <a href="#2021-11-30-the-way-down-footnote-2-return">↩</a></p></li>
<li id="2021-11-30-the-way-down-footnote-3-definition" class="footnote-definition">
<p>I freely admit my talent for writing is very slight. But I do freely admit it: something Johnson could never do. <a href="#2021-11-30-the-way-down-footnote-3-return">↩</a></p></li>
<li id="2021-11-30-the-way-down-footnote-4-definition" class="footnote-definition">
<p>Or, of course, both: David Frost is some kind of poster child for a person promoted far beyond his rather limited competence. <a href="#2021-11-30-the-way-down-footnote-4-return">↩</a></p></li>
<li id="2021-11-30-the-way-down-footnote-5-definition" class="footnote-definition">
<p>For Johnson CV19 is perhaps a blessing in disguise. Many thousands have died and many thousands more will die through his incompetence and carelessness. This is something he cares nothing about of course, since these are other people. But the enormous costs of CV19 will obscure the true costs of his brexit, and he <em>does</em> care about that. <a href="#2021-11-30-the-way-down-footnote-5-return">↩</a></p></li>
<li id="2021-11-30-the-way-down-footnote-6-definition" class="footnote-definition">
<p>But it may not: I don’t think the data is clear yet, although people who should know, such as the <a href="https://www.bbc.co.uk/news/business-59426353">CEO of Moderna</a>, are clearly quite worried about it. It may also be less deadly. <a href="#2021-11-30-the-way-down-footnote-6-return">↩</a></p></li>
<li id="2021-11-30-the-way-down-footnote-7-definition" class="footnote-definition">
<p><em>A l’exemple de Saturne, la révolution dévore ses enfants</em> (‘like Saturn, the revolution devours its children’) – Jacques Mallet du Pan <a href="#2021-11-30-the-way-down-footnote-7-return">↩</a></p></li>
<li id="2021-11-30-the-way-down-footnote-8-definition" class="footnote-definition">
<p>A coup within the tory party is probably our last, best hope. But Johnson has such a strong grip that it is by no means certain: it is frightening to plot against the man on whom you depend for career advancement. Still, we must hope. <a href="#2021-11-30-the-way-down-footnote-8-return">↩</a></p></li></ol></div>The endless droning: corrections and clarificationsurn:https-www-tfeb-org:-fragments-2021-11-25-the-endless-droning-corrections-and-clarifications2021-11-25T13:05:57Z2021-11-25T13:05:57ZTim Bradshaw
<p>It seems that <a href="https://www.tfeb.org/fragments/2021/11/22/the-endless-droning">my article</a> about the existence in the Lisp community of rather noisy people who seem to enjoy complaining rather than fixing things has attracted some interest. Some things in it were unclear, and some other things seem to have been misinterpreted: here are some corrections and clarifications.</p>
<!-- more-->
<p>First of all some people pointed out, correctly, that LispWorks is expensive if you live in a low-income country. That’s true: I should have been clearer that I believe the phenomenon I am describing is exclusively a rich-world one. I may be incorrect, but I have never heard this kind of destructive whining from anyone in a non-rich-world country.</p>
<p>It may also have appeared that I am claiming that <em>all</em> Lisp people do this: I’m not. I think the number of people is very small, and that it has always been small. But they are very noisy and even a small number of noisy people can be very destructive.</p>
<p>Some people seem to have interpreted what I wrote as saying that the current situation was fine and that Emacs / SLIME / SLY was in fact the best possible answer. Given that my second sentence was</p>
<blockquote>
<p>[Better IDEs] would obviously be desirable.</p></blockquote>
<p>this is a curious misreading. Just in case I need to make the point any more strongly: I don’t think that Emacs is some kind of be-all and end-all: better IDEs would be very good. But I also don’t think Emacs is this insurmountable barrier that people pretend it is, and I also very definitely think that some small number of people are claiming it is <em>because they want to lose</em>.</p>
<p>I should point out that this claim that it is not an insurmountable barrier comes from some experience: I have taught people Common Lisp, for money, and I’ve done so based on at least three environments:</p>
<ul>
<li>LispWorks;</li>
<li>Something based around Emacs and a CL running under it;</li>
<li>Genera.</li></ul>
<p>None of those environments presented any significant barrier. I think that LW was probably the most liked but none of them got in the way or put people off.</p>
<p>In summary: I don’t think that the current situation is ideal, and if you read what I wrote as saying that you need to read more carefully. I <em>do</em> think that the current situation is not going to deter anyone seriously interested and is very far from the largest barrier to becoming good at Lisp. I <em>do</em> think that, if you want to do something to make the situation better then you should do it, not hang around on reddit complaining about how awful it is, but that there are a small number of noisy people who do exactly that because, for them, <em>no</em> situation would be ideal because what they want is to <em>avoid</em> being able to get useful work done. Those people, unsurprisingly, often become extremely upset when you confront them with this awkward truth about themselves. They are also extremely destructive influences on any discussion around Lisp. (Equivalents of these noisy people exist in other areas, of course.) That’s one of the reasons I no longer participate in the forums where these people tend to exist.</p>
<hr />
<p>(Thanks to an ex-colleague for pointing out that I should perhaps post this.)</p>Old man yells at cloudurn:https-www-tfeb-org:-fragments-2021-11-22-old-man-yells-at-cloud2021-11-22T16:18:31Z2021-11-22T16:18:31ZTim Bradshaw
<p>Bruce Schneier <a href="https://www.schneier.com/blog/archives/2021/11/crypto-means-cryptography-not-cryptocurrency.html">is cross that ‘crypto’ no longer means what he wants it to mean</a>.</p>
<!-- more-->
<p>Here’s the thing: words in a natural language <em>mean what the users of that language want them to mean</em>. God did not hand down English on stone tablets on the top of a mountain to you, or to anyone: a very large number of people invented it, all on their own.</p>
<p>And the meanings of the words in a language as well as its grammar can and do change over time and from place to place. Do you really think you understand Shakespeare’s English? Because you probably do not understand it very well. And if you think you understand Chaucer’s English either you are a specialist or you are very confused.</p>
<p>No number of <em>ex-cathedra</em> pronouncements that ‘in English this word means that’ and ‘in English this bit of grammar is OK and this bit is not’ is going to make those things be true unless enough people agree with you. And what is true here and today may not be true there and tomorrow.</p>
<p>Once ‘flux’ meant something that you died from if you were unlucky: now it does not mean that. Today it means, in physics, the flow of some quantity across some area and in more general usage a state of change<sup><a href="#2021-11-22-old-man-yells-at-cloud-footnote-1-definition" name="2021-11-22-old-man-yells-at-cloud-footnote-1-return">1</a></sup>. Somewhere someone is cross about that change in meaning.</p>
<p>Once ‘hacker’ meant someone who spent long hours writing clever programs. Now it does not mean that. The person writing this comment is annoyed about that change in meaning.</p>
<p>Once there was a language (spoken by people who somehow managed to bargain enjoying watching other people getting eaten by wild animals into being regarded as the civilization we should all aspire to — perhaps that <em>is</em> what some of us aspire to) in which infinitives did not have particles. Many hundreds of years later fools vomit forth endless diatribes about how this means certain constructs are not allowed in a language (not a descendant of that ancient language, not even very closely related to it) which does.</p>
<p>Well, OK, why am I expending all this effort on an old man yelling at a cloud? He’s harmless, right, if foolish? No, he’s not harmless. Yes, he is foolish, but people who seek to control the language spoken by others — and always to control it in such a way that the language <em>they</em> use is correct and the language various other groups use is incorrect — are not harmless, not even slightly.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-11-22-old-man-yells-at-cloud-footnote-1-definition" class="footnote-definition">
<p>And those meanings <em>are not the same</em> because the language of physics is not the same language as English. <a href="#2021-11-22-old-man-yells-at-cloud-footnote-1-return">↩</a></p></li></ol></div>The endless droningurn:https-www-tfeb-org:-fragments-2021-11-22-the-endless-droning2021-11-22T12:36:25Z2021-11-22T12:36:25ZTim Bradshaw
<p>Someone <a href="https://www.reddit.com/r/lisp/comments/qz0a3j/why_there_is_no_new_modern_common_lisp_ide/">asked about better Lisp IDEs on reddit</a>. Such things would obviously be desirable. But the comments are entirely full of the usual sad endless droning from people who need there always to be something preventing them from doing what they pretend to want to do, and are happy to invent such barriers where none really exist. comp.lang.lisp lives on in spirit if not in fact.</p>
<p>[The rest of this article is a lot ruder than the above and I’ve intentionally censored it from the various feeds. See also <a href="https://www.tfeb.org/fragments/2021/11/25/the-endless-droning-corrections-and-clarifications">corrections and clarifications</a>.]</p>
<!-- more-->
<p>First of all it is nice to see people dismissing LispWorks because it’s ‘too expensive’. LW actually <em>has</em> an IDE and it actually <em>does</em> provide an editor which (while an Emacs inside) can pretend to be a native mac or windows editor. And it’s portable: you can develop on Windows and then build and deploy on Linux and that just works, and has done for at least two decades. But it’s ‘too expensive’: a new license for LW might cost the equivalent of a few days of employing a programmer, and the support on that license (which gets you upgrades for ever) might be a day or so. If that’s ‘too expensive’ then your costing is so fucked you might as well give up now and become a beggar. (The announcement of the Haskell IDE which triggered the post is for a commercial one, by the way, so let’s not have any ‘oh, but it’s not ideologically pure’ noise, thanks.)</p>
<p>And then we get the endless ‘things were better on ⟨<em>ancient technology of your choice</em>⟩’. Here’s the thing: I used both Symbolics and Interlisp-D based systems, extensively. They weren’t better than the LW IDE. They had one or two neat features that the LW IDE doesn’t have, because they’re hard to do on modern hardware, but they were not better. In the case of Interlisp-D systems it took a couple of weeks of practice before you could even use the thing for more than ten minutes without spending most of the time wondering what some front panel code meant (it always meant ‘I have crashed for reasons I cannot explain and you have lost your work and must now reload the sysout and that will take half an hour’) and how to restart it. That was … harder than learning Emacs. Those ancient systems might have been better than Emacs/SLIME … but they might not, I am not sure. But always, always there is the endless mindless droning from people mourning some distant lost golden age: well, I was <em>there</em> and that golden age never existed.</p>
<p>And then there’s the ‘but the new programmers find Emacs hard’. Seriously? Because people starting to learn Lisp are learning a language whose key idea is that it is a programming language <em>in which you write programming languages</em>. Lisp makes doing far more possible than other languages, but nothing is ever going to make it easy because designing programming languages turns out to be hard. Lisp is a language all of whose interesting features are intellectually difficult ideas. If you are put off Lisp by having to learn some different keys to press, <em>give up now</em> and learn Python or some other intellectually undemanding language instead, because Emacs is not remotely the hardest thing you are going to have to deal with. This is like people doing maths degrees complaining about the squiggly Greek characters: if that’s putting you off maths, <em>don’t do maths</em>. OK, ζ and ξ are kind of fiddly to write, but understanding what a Banach space is actually <em>is</em> hard. And, by the way, at some point you <em>are</em> going to have to learn LaTeX, and if you think Emacs is hard, you have a whole other think coming.</p>
<p>Oh, and by the way, I’ve worked somewhere where large numbers of people from non-programming backgrounds wrote vast masses of Python. How did they do it? They used Emacs: some of them probably used vi or vim. But they were actual scientists so they know what hard things are, and knew that learning Emacs was not one of those things.</p>
<p>And finally, there’s a long diatribe from someone listing all the steps they had to go through to get a CL IDE set up on a machine. This same person claims to have run teams of Lisp programmers. Well, there’s this idea called <em>programming</em>: if you have a long laborious set of tasks to do more than once <em>you write a program to do that for you</em>. And yes, I have done just that.</p>
<hr />
<p>All of these people <em>want to lose</em>: they need there always to be something in the way that prevents them getting whatever it is they pretend to want to do done. If such a barrier is removed <em>they will build a new one</em>: I know this because I have done just that and watched them build their new barrier so they could avoid actually doing anything and keep complaining. These barriers <em>do not exist</em>: if you want a cross-platform IDE for Lisp <a href="http://www.lispworks.com/"><em>that IDE exists</em></a>. If you don’t want to use a commercial product, Emacs and SLIME/SLY are free, and fine. And yes there is a learning curve which is somewhat steep, but <em>intellectually difficult things have steep learning curves</em>: if you’re going to become a productive mathematician you are going to go through four years of very steep learning curve indeed, and if you’re going to become a productive Lisp programmer you’re going to go through a learning curve perhaps a tenth or less as hard as that, of which Emacs is one tiny part. If you’re not up to that, <em>don’t write Lisp</em>.</p>
<p>And if what you enjoy doing is whining in public about how things are always in your way then <em>fuck off</em>.</p>The proper use of macros in Lispurn:https-www-tfeb-org:-fragments-2021-11-11-the-proper-use-of-macros-in-lisp2021-11-11T14:32:11Z2021-11-11T14:32:11ZTim Bradshaw
<p>People learning Lisp often try to learn how to write macros by taking an existing function they have written and turning it into a macro. This is a mistake: macros and functions serve different purposes and it is almost never useful to turn functions into macros, or macros into functions.</p>
<!-- more-->
<p>Let’s say you are learning Common Lisp<sup><a href="#2021-11-11-the-proper-use-of-macros-in-lisp-footnote-1-definition" name="2021-11-11-the-proper-use-of-macros-in-lisp-footnote-1-return">1</a></sup>, and you have written a fairly obvious factorial function based on the natural mathematical definition: if \(n \in \mathbb{N}\), then</p>
<p>\[
n! =
\begin{cases}
1 &n \le 1\\
n \times (n - 1)! &n > 1
\end{cases}
\]</p>
<p>So this gives you a fairly obvious recursive definition of <code>factorial</code>:</p>
<pre class="brush: lisp"><code>(defun factorial (n)
(if (<= n 1)
1
(* n (factorial (1- n )))))</code></pre>
<p>And so, you think you want to learn about macros so can you write <code>factorial</code> as a macro? And you might end up with something like this:</p>
<pre class="brush: lisp"><code>(defmacro factorial (n)
`(if (<= ,n 1)
1
(* ,n (factorial ,(1- n )))))</code></pre>
<p>And this superficially seems as if it works:</p>
<pre class="brush: lisp"><code>> (factorial 10)
3628800</code></pre>
<p>But it doesn’t, in fact, work:</p>
<pre class="brush: lisp"><code>> (let ((x 3))
(factorial x))
Error: In 1- of (x) arguments should be of type number.</code></pre>
<p>Why doesn’t this work and can it be fixed so it does? If it can’t what has gone wrong and how are macros meant to work and what are they useful for?</p>
<p>It can’t be fixed so that it works: trying to rewrite functions as macros is a bad idea, and if you want to learn what is interesting about macros you should not start there.</p>
<p>To understand why this is true you need to understand what macros actually <em>are</em> in Lisp.</p>
<h2 id="what-macros-are-a-first-look">What macros are: a first look</h2>
<p><strong>A macro is a function whose domain and range is <em>syntax</em>.</strong></p>
<p>Macros <em>are</em> functions (quite explicitly so in CL: you can get at the function of a macro with <code>macro-function</code>, and this is something you can happily call the way you would call any other function), but they are functions whose domain and range is <em>syntax</em>. A macro is a function whose argument is a language whose syntax includes the macro and whose value, when called on an instance of that language, is a language whose syntax <em>doesn’t</em> include the macro. It may work recursively: its value may be a language which includes the same macro but in some simpler way, such that the process will terminate at some point.</p>
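<p>As a minimal illustration (the macro <code>twice</code> is invented for the example): a macro function is an ordinary function of two arguments, the whole macro form and an environment, and you can call it yourself:</p>
<pre class="brush: lisp"><code>(defmacro twice (form)
  `(progn ,form ,form))

;;; The macro function maps syntax to syntax
> (funcall (macro-function 'twice) '(twice (print 1)) nil)
(progn (print 1) (print 1))</code></pre>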
<p>So the job of macros is to provide a family of extended languages built on some core Lisp which has no remaining macros: only functions and function application, special operators and special forms involving them, and literals. One of those languages is the language we call Common Lisp, but the macros written by people serve to extend this language into a multitude of variants.</p>
<p>As an example of this I often write in a language which is like CL, but is extended by the presence of a number of extra constructs, one of which is called ITERATE (but it predates the well-known one and is not at all the same):</p>
<pre class="brush: lisp"><code>(iterate next ((x 1))
(if (< x 10)
(next (1+ x))
x)</code></pre>
<p>is equivalent to</p>
<pre class="brush: lisp"><code>(labels ((next (x)
(if (< x 10)
(next (1+ x))
x)))
(next 1))</code></pre>
<p>Once upon a time when I first wrote <code>iterate</code>, it used to manually optimize the recursive calls to jumps in some cases, because the Symbolics I wrote it on didn’t have tail-call elimination. That’s a non-problem in LispWorks<sup><a href="#2021-11-11-the-proper-use-of-macros-in-lisp-footnote-2-definition" name="2021-11-11-the-proper-use-of-macros-in-lisp-footnote-2-return">2</a></sup>. Anyone familiar with Scheme will recognise <code>iterate</code> as named <code>let</code>, which is where it came from (once, I think, it was known as <code>nlet</code>).</p>
<p><code>iterate</code> is implemented by a function which maps from the language which includes it to a language which doesn’t include it, by mapping the syntax as above.</p>
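<p>Here is a sketch of how such a macro might be written. This is not the real implementation of <code>iterate</code>, just the obvious transcription of the mapping above:</p>
<pre class="brush: lisp"><code>(defmacro iterate (name (&rest bindings) &body forms)
  ;; Map the language with ITERATE into the one without it: a
  ;; LABELS form defining NAME, followed by an initial call
  `(labels ((,name ,(mapcar #'first bindings)
              ,@forms))
     (,name ,@(mapcar #'second bindings))))</code></pre>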
<p>So compare this with a factorial function: factorial is a function whose domain is natural numbers and whose range is also natural numbers, and it has an obvious recursive definition. Well, natural numbers are part of the syntax of Lisp, but they’re a tiny part of it. So implementing factorial as a macro is, really, a hopeless task. What should</p>
<pre class="brush: lisp"><code>(factorial (+ x y (f z)))</code></pre>
<p>actually do when considered as a mapping between languages? Assuming you are using the recursive definition of the factorial function then the answer is it can’t map to anything useful at all: a function which implements that recursive definition simply has to be called at run time. The very best you could do would seem to be this:</p>
<pre class="brush: lisp"><code>(defun fact (n)
  (if (<= n 1)
      1
      (* n (fact (1- n)))))

(defmacro factorial (expression)
  `(fact ,expression))</code></pre>
<p>And that’s not a useful macro (but see below).</p>
<p>So the answer is, again, that macros are functions which map between <em>languages</em> and they are useful where you want a new <em>language</em>: not just the same language with extra functions in it, but a language with new control constructs or something like that. If you are writing functions whose range is something which is not the syntax of a language built on Common Lisp, <em>don’t write macros</em>.</p>
<h2 id="what-macros-are-a-second-look">What macros are: a second look</h2>
<p><strong>Macroexpansion is compilation.</strong></p>
<p>A function whose domain is one language and whose range is another is a <em>compiler</em> for the language of the domain, especially when that language is somehow richer than the language of the range, which is the case for macros.</p>
<p>But it’s a simplification to say that <em>macros</em> are this function: they’re not, they’re only part of it. The actual function which maps between the two languages is made up of macros <em>and the macroexpander provided by CL itself</em>. The macroexpander is what arranges for the functions defined by macros to be called in the right places, and also it is the thing which arranges for various recursive macros to actually make up a recursive function. So it’s important to understand that the macroexpander is a critical part of the process: macros on their own only provide part of it.</p>
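<p>You can watch a single step of this process with <code>macroexpand-1</code>, for instance on the broken <code>factorial</code> macro from earlier:</p>
<pre class="brush: lisp"><code>> (macroexpand-1 '(factorial 10))
(if (<= 10 1) 1 (* 10 (factorial 9)))
t</code></pre>
<p>When code is fully expanded the inner <code>(factorial 9)</code> will be expanded in turn, and so on: the recursion lives in the combination of the macro function and the macroexpander, not in the macro function alone.</p>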
<h2 id="an-example-two-versions-of-a-recursive-macro">An example: two versions of a recursive macro</h2>
<p>People often say that you should not write recursive macros, but this prohibition on recursive macros is pretty specious: they’re just fine. Consider a language which only has <code>lambda</code> and doesn’t have <code>let</code>. Well, we can write a simple version of <code>let</code>, which I’ll call <code>bind</code> as a macro: a function which takes this new language and turns it into the more basic one. Here’s that macro:</p>
<pre class="brush: lisp"><code>(defmacro bind ((&rest bindings) &body forms)
`((lambda ,(mapcar #'first bindings) ,@forms)
,@(mapcar #'second bindings)))</code></pre>
<p>And now</p>
<pre class="brush: lisp"><code>> (bind ((x 1) (y 2))
(+ x y))
(bind ((x 1) (y 2)) (+ x y))
-> ((lambda (x y) (+ x y)) 1 2)
3</code></pre>
<p>(These example expansions come via use of my <a href="https://tfeb.github.io/tfeb-lisp-hax/#tracing-macroexpansion-trace-macroexpand">trace-macroexpand package</a>, available in a good Lisp near you: see appendix for configuration).</p>
<p>So now we have a language with a binding form which is more convenient than <code>lambda</code>. But maybe we want to be able to bind sequentially? Well, we can write a <code>let*</code> version, called <code>bind*</code>, which looks like this:</p>
<pre class="brush: lisp"><code>(defmacro bind* ((&rest bindings) &body forms)
(if (null (rest bindings))
`(bind ,bindings ,@forms)
`(bind (,(first bindings))
(bind* ,(rest bindings) ,@forms))))</code></pre>
<p>And you can see how this works: it checks if there’s just one binding in which case it’s just <code>bind</code>, and if there’s more than one it peels off the first and then expands into a <code>bind*</code> form for the rest. And you can see this working (here both <code>bind</code> and <code>bind*</code> are being traced):</p>
<pre class="brush: lisp"><code>> (bind* ((x 1) (y (+ x 2)))
(+ x y))
(bind* ((x 1) (y (+ x 2))) (+ x y))
-> (bind ((x 1)) (bind* ((y (+ x 2))) (+ x y)))
(bind ((x 1)) (bind* ((y (+ x 2))) (+ x y)))
-> ((lambda (x) (bind* ((y (+ x 2))) (+ x y))) 1)
(bind* ((y (+ x 2))) (+ x y))
-> (bind ((y (+ x 2))) (+ x y))
(bind ((y (+ x 2))) (+ x y))
-> ((lambda (y) (+ x y)) (+ x 2))
(bind* ((y (+ x 2))) (+ x y))
-> (bind ((y (+ x 2))) (+ x y))
(bind ((y (+ x 2))) (+ x y))
-> ((lambda (y) (+ x y)) (+ x 2))
4</code></pre>
<p>You can see that, in this implementation, which is LW again, some of the forms are expanded more than once: that’s not uncommon in interpreted code. Since macros should generally be functions (so, have no side-effects) it does not matter that they may be expanded multiple times. Compilation will expand macros and then compile the result, so all the overhead of macroexpansion happens ahead of run time:</p>
<pre class="brush: lisp"><code> (defun foo (x)
(bind* ((y (1+ x)) (z (1+ y)))
(+ y z)))
foo
> (compile *)
(bind* ((y (1+ x)) (z (1+ y))) (+ y z))
-> (bind ((y (1+ x))) (bind* ((z (1+ y))) (+ y z)))
(bind ((y (1+ x))) (bind* ((z (1+ y))) (+ y z)))
-> ((lambda (y) (bind* ((z (1+ y))) (+ y z))) (1+ x))
(bind* ((z (1+ y))) (+ y z))
-> (bind ((z (1+ y))) (+ y z))
(bind ((z (1+ y))) (+ y z))
-> ((lambda (z) (+ y z)) (1+ y))
foo
nil
nil
> (foo 3)
9</code></pre>
<p>There’s nothing wrong with macros like this, which expand into simpler versions of themselves. You just have to make sure that the recursive expansion process is producing successively simpler bits of syntax and has a well-defined termination condition.</p>
<p>Macros like this are often called ‘recursive’ but they’re actually not: the function associated with <code>bind*</code> does not call itself. What <em>is</em> recursive is the function implicitly defined by the combination of the macro function and the macroexpander: the <code>bind*</code> function simply expands into a bit of syntax which it knows will cause the macroexpander to call it again.</p>
<p>It is possible to write <code>bind*</code> such that the macro function <em>itself</em> is recursive:</p>
<pre class="brush: lisp"><code>(defmacro bind* ((&rest bindings) &body forms)
(labels ((expand-bind (btail)
(if (null (rest btail))
`(bind ,btail
,@forms)
`(bind (,(first btail))
,(expand-bind (rest btail))))))
(expand-bind bindings)))</code></pre>
<p>And now compiling <code>foo</code> again results in this output from tracing macroexpansion:</p>
<pre class="brush: lisp"><code>(bind* ((y (1+ x)) (z (1+ y))) (+ y z))
-> (bind ((y (1+ x))) (bind ((z (1+ y))) (+ y z)))
(bind ((y (1+ x))) (bind ((z (1+ y))) (+ y z)))
-> ((lambda (y) (bind ((z (1+ y))) (+ y z))) (1+ x))
(bind ((z (1+ y))) (+ y z))
-> ((lambda (z) (+ y z)) (1+ y))</code></pre>
<p>You can see that now all the recursion happens within the macro function for <code>bind*</code> itself: the macroexpander calls <code>bind*</code>’s macro function just once.</p>
<p>While it’s possible to write macros like this second version of <code>bind*</code>, it is normally easier to write the first version and to allow the combination of the macroexpander and the macro function to implement the recursive expansion.</p>
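<p>Finally, here is a made-up example of a macro which violates the termination condition mentioned above: each expansion step yields the same form again, so the expansion process never reaches a fixed point and expanding any use of it will never terminate:</p>
<pre class="brush: lisp"><code>;;; Don't do this: the expansion is no simpler than the original
(defmacro forever (form)
  `(forever ,form))</code></pre>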
<hr />
<h2 id="two-historical-uses-for-macros">Two historical uses for macros</h2>
<p>There are two uses for macros — both now historical — where they <em>were</em> used where functions would be more natural.</p>
<p>The first of these is <em>function inlining</em>, where you want to avoid the overhead of calling a small function many times. This overhead was a lot on computers made of cardboard, as all computers were, and also if the stack got too deep the cardboard would tear and this was bad. It makes no real sense to inline a recursive function such as the above <code>factorial</code>: how would the inlining process terminate? But you could rewrite a factorial function to be explicitly iterative:</p>
<pre class="brush: lisp"><code>(defun factorial (n)
(do* ((k 1 (1+ k))
(f k (* f k)))
((>= k n) f)))</code></pre>
<p>And now, if you had very many calls to <code>factorial</code>, wanted to optimise the function call overhead away, <em>and it was 1975</em>, you might write this:</p>
<pre class="brush: lisp"><code>(defmacro factorial (n)
`(let ((nv ,n))
(do* ((k 1 (1+ k))
(f k (* f k)))
((>= k nv) f))))</code></pre>
<p>And this has the effect of replacing <code>(factorial n)</code> by an expression which will compute the factorial of <code>n</code>. The cost of that is that <code>(funcall #'factorial n)</code> is not going to work, and <code>(funcall (macro-function 'factorial) ...)</code> is never what you want.</p>
<p>Well, that’s what you did in 1975, because Lisp compilers were made out of the things people found down the sides of sofas. Now it’s no longer 1975 and you just tell the compiler that you want it to inline the function, please:</p>
<pre class="brush: lisp"><code>(declaim (inline factorial))
(defun factorial (n) ...)</code></pre>
<p>and it will do that for you. So this use of macros is now purely historical.</p>
<p>The second reason for macros where you really want functions is computing things at compile time. Let’s say you have lots of expressions like <code>(factorial 32)</code> in your code. Well, you could do this:</p>
<pre class="brush: lisp"><code>(defmacro factorial (expression)
(typecase expression
((integer 0)
(factorial/fn expression))
(number
(error "factorial of non-natural literal ~S" expression))
(t
`(factorial/fn ,expression))))</code></pre>
<p>So the <code>factorial</code> macro checks to see if its argument is a literal natural number and will compute the factorial of it at macroexpansion time (so, at compile time or just before compile time). So a function like</p>
<pre class="brush: lisp"><code>(defun foo ()
(factorial 32))</code></pre>
<p>will now compile to simply return <code>263130836933693530167218012160000000</code>. And, even better, there’s some compile-time error checking: code which is, say, <code>(factorial 12.3)</code> will cause a compile-time error.</p>
<p>Well, again, this is what you would do if it was 1975. It’s not 1975 any more, and CL has a special tool for dealing with just this problem: compiler macros.</p>
<pre class="brush: lisp"><code>(defun factorial (n)
(do* ((k 1 (1+ k))
(f k (* f k)))
((>= k n) f)))
(define-compiler-macro factorial (&whole form n)
(typecase n
((integer 0)
(factorial n))
(number
(error "literal number is not a natural: ~S" n))
(t form)))</code></pre>
<p>Now <code>factorial</code> is a function and works the way you expect — <code>(funcall #'factorial ...)</code> will work fine. But the compiler knows that if it comes across <code>(factorial ...)</code> then it should give the compiler macro for <code>factorial</code> a chance to say what this expression should actually be. And the compiler macro does an explicit check for the argument being a literal natural number, and if it is computes the factorial at compile time, and the same check for a literal number which is not a natural, and finally just says ‘I don’t know, call the function’. Note that the compiler macro itself calls <code>factorial</code>, but since the argument isn’t a literal there’s no recursive doom.</p>
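<p>A hypothetical interaction (implementations are permitted to ignore compiler macros, though real compilers use them):</p>
<pre class="brush: lisp"><code>> (funcall #'factorial 5)  ;works: factorial is an ordinary function
120
> (compile nil '(lambda () (factorial 32)))  ;compiles to a constant
#<function ...>
nil
nil
> (compile nil '(lambda () (factorial 12.3)))
Error: literal number is not a natural: 12.3</code></pre>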
<p>So this takes care of the other antique use of macros where you would expect functions. And of course you can combine this with inlining and it will all work fine: you can write functions which will handle special cases via compiler macros and will otherwise be inlined.</p>
<p>That leaves macros serving the purpose they are actually useful for: building languages.</p>
<hr />
<h2 id="appendix-setting-up-trace-macroexpand">Appendix: setting up <code>trace-macroexpand</code></h2>
<pre class="brush: lisp"><code>(use-package :org.tfeb.hax.trace-macroexpand)
;;; Don't restrict print length or level when tracing
(setf *trace-macroexpand-print-level* nil
*trace-macroexpand-print-length* nil)
;;; Enable tracing
(trace-macroexpand)
;;; Trace the macros you want to look at ...
(trace-macro ...)
;;; ... and ntrace them
(untrace-macro ...)</code></pre>
<hr />
<div class="footnotes">
<ol>
<li id="2021-11-11-the-proper-use-of-macros-in-lisp-footnote-1-definition" class="footnote-definition">
<p>All the examples in this article are in Common Lisp except where otherwise specified. Other Lisps have similar considerations, although macros in Scheme are not explicitly functions in the way they are in CL. <a href="#2021-11-11-the-proper-use-of-macros-in-lisp-footnote-1-return">↩</a></p></li>
<li id="2021-11-11-the-proper-use-of-macros-in-lisp-footnote-2-definition" class="footnote-definition">
<p>This article originated as a message on the <code>lisp-hug</code> mailing list for <a href="http://www.lispworks.com/">LispWorks</a> users. References to ‘LW’ mean LispWorks, although everything here should apply to any modern CL. (In terms of tail call elimination I would define a CL which does not eliminate tail self-calls in almost all cases under reasonable optimization settings as pre-modern: I don’t use such implementations.) <a href="#2021-11-11-the-proper-use-of-macros-in-lisp-footnote-2-return">↩</a></p></li></ol></div>A letter to my MPurn:https-www-tfeb-org:-fragments-2021-11-04-a-letter-to-my-mp2021-11-04T09:53:11Z2021-11-04T09:53:11ZTim Bradshaw
<p>On the occasion of the johnsonites’ rewriting the rules on political corruption to suit themselves.</p>
<!-- more-->
<p>So, England essentially now has two political parties: one which, though deeply flawed, represents democracy, and the other one: the once-great party to which you belong and which now represents nothing but its own greed, obvious corruption and lies. The party of a laughing idiot clown whose incompetence and gross stupidity has caused tens of thousands of deaths in the last two years and who, no doubt, will kill tens of thousands more in the next few years. The party of a man who, somehow, will be found not to be responsible for the heaped corpses on which he stands. The party whose foreign aid cuts will probably kill hundreds of thousands (but, you know, mostly poor black people and no-one in your party cares about them, do they?). A party of liars and cheats. A party which thinks nothing of changing the rules so its own corrupt MPs are let off. A party of flag-waving little Englanders and racists.</p>
<p>You must be very proud of yourself to represent such an organisation. For myself, I am now ashamed to be English, ashamed to live in your constituency, ashamed of everything your party has come to stand for.</p>
<p>I hope you sleep well. Please, don’t reply.</p>The best Lispurn:https-www-tfeb-org:-fragments-2021-11-03-the-best-lisp2021-11-03T12:03:44Z2021-11-03T12:03:44ZTim Bradshaw
<p>People sometimes ask <a href="https://www.reddit.com/r/lisp/comments/qlcza4/best_lisp_dialect/">which is the best Lisp dialect</a>? That’s a category error, and here’s why.</p>
<!-- more-->
<p>Programming in Lisp — any Lisp — is about <em>building languages</em>: in Lisp the way you solve a problem is by building a language — a jargon, or a dialect if you like — to talk about the problem and then solving the problem in that language. Lisps are, quite explicitly, language-building languages.</p>
<p>This is, in fact, how people solve large problems in <em>all</em> programming languages: <a href="https://en.wikipedia.org/wiki/Greenspun's_tenth_rule" title="Greenspun's tenth rule">Greenspun’s tenth rule</a> isn’t really a statement about Common Lisp, it’s a statement that all sufficiently large software systems end up having some hacked-together, informally-specified, half-working <em>language</em> in which the problem is actually solved. Often people won’t understand that the thing they’ve built is in fact a language, but that’s what it is. Everyone who has worked on large-scale software will have come across these things: often they are very horrible, and involve much use of language-in-a-string<sup><a href="#2021-11-03-the-best-lisp-footnote-1-definition" name="2021-11-03-the-best-lisp-footnote-1-return">1</a></sup>.</p>
<p>The Lisp difference is two things: when you start solving a problem in Lisp, you <em>know</em>, quite explicitly, that this is what you are going to do; and the language has wonderful tools which let you incrementally build a series of lightweight languages, ending up with one or more languages in which to solve the problem.</p>
<p>So, after that preface, why is this question the wrong one to ask? Well, if you are going to program in Lisp you are going to be building languages, and you want those languages not to be awful. Lisp makes it far easier to build languages which are not awful, but it doesn’t prevent you building awful ones if you want to. And again, anyone who has dealt with enough languages built on Lisps will have come across some which are, in fact, awful.</p>
<p>If you are going to build languages then you need to understand how languages work — what makes a language habitable to its human users (the computer, with very few exceptions, does not care). That means you will need to be a <em>linguist</em>. So the question then is: how do you become a linguist? Well, we know the answer to that, because there are lots of linguists and lots of courses on linguistics. You might say that, well, those people study <em>natural</em> languages, but that’s irrelevant: natural languages have been under evolutionary pressure for a very long time and they’re really <em>good</em> for what they’re designed for (which is not the same as what programming languages are designed for, but the users — humans — are the same).</p>
<p>So, do you become a linguist by learning French? Or German? Or Latin? Or Cuzco Quechua? No, you don’t. You become a linguist by learning enough about enough languages that you can understand how languages work. A linguist isn’t someone who speaks French really well: they’re someone who understands that French is a Romance language, that German isn’t but has many Romance loan words, that English is closer to German than it is French but got a vast injection of Norman French, which in turn wasn’t that close to modern French, that Swiss German has cross-serial dependencies but Hochdeutsch does not and what that means, and so on. A linguist is someone who understands things about the <em>structure</em> of languages: what do you see, what do you never see, how do different languages do equivalent things? And so on.</p>
<p>The way you become a linguist is not by picking a language and learning it: it’s by looking at lots of languages enough to understand how they work.</p>
<p>If you want to learn to program in Lisp, you will need to become a linguist. The very best way to ensure you fail at that is to pick a ‘best’ Lisp and learn that. There is no best Lisp, and in order to program well in <em>any</em> Lisp you must be exposed to as many Lisps and as many other languages as possible.</p>
<hr />
<p>If you think there’s a distinction between a ‘dialect’, a ‘jargon’ and a ‘language’ then I have news for you: there is. A language is a dialect with a standards committee. (This is stolen from a quote due to Max Weinreich that all linguists know:</p>
<blockquote>
<p>אַ שפּראַך איז אַ דיאַלעקט מיט אַן אַרמיי און פֿלאָט</p></blockquote>
<p>a shprakh iz a dyalekt mit an armey un flot: ‘a language is a dialect with an army and a navy’.)</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-11-03-the-best-lisp-footnote-1-definition" class="footnote-definition">
<p>‘Language-in-a-string’ is where a programming language has another programming language embedded in strings in the outer language. Sometimes programs in that inner programming language will be made up by string concatenation in the outer language. Sometimes that inner language will, in turn, have languages embedded in its strings. It’s a terrible, terrible thing. <a href="#2021-11-03-the-best-lisp-footnote-1-return">↩</a></p></li></ol></div>An age of optimismurn:https-www-tfeb-org:-fragments-2021-10-27-an-age-of-optimism2021-10-27T09:16:22Z2021-10-27T09:16:22ZTim Bradshaw
<p>On the occasion of Rishi Sunak’s budget.</p>
<!-- more-->
<p>Have we just made enemies of our best friends? are there shortages brought about by our own idiocy? well, never mind, be optimistic! Have we just killed tens of thousands of people through our own inaction? do not think of that, that is past now: simply declare the pandemic over and be optimistic! Does the virus not hear our optimism and will tens of thousands more die as a result? do not worry, be optimistic! Does physics not listen to the stupid tales of endless exponential growth told by idiot economists? don’t worry, physics cannot stand in the way of our glorious future: be optimistic, that is enough!</p>
<p>Will the climate talks fail, or if they succeed will we not think to implement our grand vaporous proposals: who needs actions when you’ve got words? Will the sunlit uplands, instead, be deserts drying in the sun? will there be water shortages? will there be catastrophic movements of people fleeing as their homes become uninhabitable? will there be resource wars? will billions die? Don’t think of it, it is years away yet and we can push our exponential death cult a little further yet: instead be optimistic!</p>
<p>Are we ruled by idiot clowns who are in the process of dismantling democracy so they may rule for ever? yes, we are: we are those clowns and we will march in glory, right arms raised, into our brilliant thousand-year reign: be optimistic! The future is ours!</p>
<blockquote>
<p>Nothing on the top but a bucket and a mop
<br />And an illustrated book about birds
<br />You see a lot up there but don’t be scared
<br />Who needs action when you got words
<br />— The Meat Puppets</p></blockquote>Computer insecurityurn:https-www-tfeb-org:-fragments-2021-09-27-computer-insecurity2021-09-27T15:35:02Z2021-09-27T15:35:02ZTim Bradshaw
<p>Making computer systems secure is very difficult. The consequences of insecure systems are already extremely serious and will be catastrophic in future if they are not already. Malignant people, often sponsored by malignant states, are actively attacking computer systems and have had considerable success doing so.</p>
<p>So it is surprising that companies whose stated aims are to increase security are effectively working to make their customers’ systems less secure.</p>
<!-- more-->
<h2 id="managing-large-complex-computing-installations">Managing large, complex computing installations</h2>
<p>For any large, complex computing installation<sup><a href="#2021-09-27-computer-insecurity-footnote-1-definition" name="2021-09-27-computer-insecurity-footnote-1-return">1</a></sup>, simply <em>managing</em> it is a problem. The way of managing a small installation — having someone whose job (or part of whose job) is to look after the installation — has terrible scaling problems: if your installation has a million OS instances, then keeping them up to date might involve a hundred thousand people. And if you could afford that many people you still haven’t solved the problem: with a large number of people whose job is to look after parts of the installation there is a vanishingly tiny chance that they will do so consistently.</p>
<p>For systems which are merely <em>large</em> this problem can be made a lot simpler: for such a system the number of components is far larger than the number of tasks the system performs, so there are many components for each task. These components can then be forced to be identical (or identical-enough). The failure of single components simply lowers the capacity of the system in almost all cases. There are still scaling problems — for a system with a huge amount of hardware, hardware failure rates will mean that more of the hardware fails and needs to be replaced, requiring people to actually do the replacement — but much of the management of such a system scales much less than linearly with its size. Finding problems which both can be solved by systems which are merely large and from which money can be made is what made the giant internet companies so rich, of course<sup><a href="#2021-09-27-computer-insecurity-footnote-2-definition" name="2021-09-27-computer-insecurity-footnote-2-return">2</a></sup>.</p>
<p>For systems which are both large and complex the problem is far harder: because such a system is performing a large number of distinct tasks managing it necessarily requires people with expertise in all these tasks, and there are only so many things a person can be good at. Because of this, running such a system is never really scalable. But, if you can isolate various layers of the system — the computing and storage hardware, the operating system, the software platform on which applications live, and so on — then you can make <em>those</em> parts of the system into something which is merely large, and you <em>can</em> manage those in a way which will scale.</p>
<p>This, of course, is exactly what everyone with a large, complex computing installation is trying to do.</p>
<h3 id="single-points-of-control">Single points of control</h3>
<p>The trick to managing a large installation, or the parts of a large, complex installation which can be made merely large, is to have <em>single points of control</em>. For instance, if I want to deploy some update to a very large number of machines, I very definitely don’t want to have to access each machine individually to do that: instead I need to have some single point of control from where I can say ‘deploy this update to this set of machines’ and that will just happen, and I’ll get some kind of report about which machines it worked on and so on.</p>
<p>Making the management of large installations scalable requires these single points of control. They may not be rooms full of dials and flashing lights in hollowed-out volcanos staffed by people in white coats, where occasional klaxons sound (although, of course, they should be), but they have to exist, somewhere: it must be the case that changes to the system can be made in one place, or a very small number of places, and take effect over the whole system. There’s no other way to do this.</p>
<h3 id="a-security-problem">A security problem</h3>
<p>Single points of control present a quite considerable security problem. They are necessary so that the system can be managed efficiently, but it doesn’t say anywhere that the changes made from such a single point of control are <em>good</em> changes. So two things are extremely important:</p>
<ol>
<li>all the single points of control need to be known about and their number should be kept as small as possible;</li>
<li>all the single points of control must be very carefully managed, with extensive controls over access, carefully managed logs and so forth.</li></ol>
<p>I suspect most organisations fail at both of these, unfortunately: they neither keep a careful catalogue of the single points of control and nor do they control access to them carefully enough. This essay, however, is not about how to deal with this problem except in one respect.</p>
<h3 id="transitive-closure">Transitive closure</h3>
<p>To understand what the single points of control are you need to understand the notion of <em>transitive closure</em>. This is pretty simple, fortunately: if a system \(a\) controls a system \(b\), and system \(b\) controls systems \(c, d, \ldots\), then, by transitive closure, system \(a\) controls all of systems \(c, d, \ldots\). And similarly, if \(d\) controls \(g\), then \(a\) also controls \(g\). What this means is that, in order to understand what the single points of control are, you need to construct graphs<sup><a href="#2021-09-27-computer-insecurity-footnote-3-definition" name="2021-09-27-computer-insecurity-footnote-3-return">3</a></sup> of the transitive closure of control. This is not hard to do, but it <em>is</em> quite hard for people to remember these graphs: they really need to exist in some explicit form. Doing this is also a good exercise in making sure you actually do think hard about what the nodes in the graph are: what <em>are</em> the things which grant control over some system, and how are they being managed.</p>
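<p>Computing the closure itself is easy: the hard part is knowing what the graph is. As a sketch in Lisp, representing the graph as an alist mapping each system to the systems it directly controls (the representation and names are invented for the example):</p>
<pre class="brush: lisp"><code>(defun controls* (system graph)
  ;; Return everything SYSTEM controls, directly or transitively.
  ;; GRAPH is an alist from a system to the systems it directly
  ;; controls.
  (let ((controlled '()))
    (labels ((walk (s)
               (dolist (child (cdr (assoc s graph)))
                 (unless (member child controlled)
                   (push child controlled)
                   (walk child)))))
      (walk system)
      controlled)))

;;; With the systems above: a controls b, b controls c and d,
;;; d controls g
> (controls* 'a '((a b) (b c d) (d g)))
(g d c b)</code></pre>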
<p>An important thing about this transitive closure of control is that everything gets more sensitive as you go up the graph<sup><a href="#2021-09-27-computer-insecurity-footnote-4-definition" name="2021-09-27-computer-insecurity-footnote-4-return">4</a></sup>: the higher nodes in the graph control more lower nodes, and often very many more lower nodes. If the graph is a tree with a constant branching factor \(b\) then the number of nodes a given node controls grows like \(b^h\), where \(h\) is its height in the tree: it grows exponentially with height, and that’s fast.</p>
<p>All of this means that for large installations the points of control near the top of the tree are <em>extremely</em> sensitive: they need to be very tightly controlled indeed. It would be foolish, wouldn’t it, to allow third-parties to manage these points of control?</p>
<h3 id="were-all-fools">We’re all fools</h3>
<p>Of course, we all do exactly that, all the time. We all run software we have neither written nor exhaustively checked<sup><a href="#2021-09-27-computer-insecurity-footnote-5-definition" name="2021-09-27-computer-insecurity-footnote-5-return">5</a></sup>, on hardware we don’t really understand, for instance, and thus outsource our security to the people who write this software and make this hardware. And most of the time it’s OK. Most of the time. Sometimes bad things are found in the software or the hardware and we have to rush around to deal with them. Well, not so much ‘sometimes’ as ‘quite often’ in fact.</p>
<p>But we don’t really have much choice about this: in theory we could build our own hardware and write our own software to run on it as people did in the 1940s and 1950s, but in practice that’s absurdly impractical.</p>
<p>But that’s not where it ends, of course. We now all love our cloud computing: running our software on top of platforms and hardware managed by other people, and keeping our data on their storage systems. Because of course no-one could ever compromise one of these suppliers of computing resources without us realising, quietly changing the cloud platform so it recorded interesting things about what we’re doing. And of course these, very large, computing infrastructures are not managed in turn from single points of control which now, by transitive closure, have control over the computing infrastructures of a huge number of organisations. Oh, wait.</p>
<p>Well this, too, seems to have worked out reasonably well. So far. And this essay is not about the risks of cloud computing.</p>
<h3 id="some-more-than-others">Some more than others</h3>
<p>There are things we can do to control the risks we all take. For instance, when dealing with software we haven’t written or checked in detail, we can carefully run it first in a controlled, isolated environment to try and assess any problems with it. This doesn’t <em>ensure</em> safety — nothing can do that — but it does mean that we have at least some chance of finding out if the new software is broken or malignant.</p>
<p>What we should not be doing is blindly accepting and deploying updates to software into an environment we care about. And we should very, very definitely not be doing that when that software has access to control our systems. If we were to do that, then, by the time we know that the people we’re getting the software from have been compromised, or were perhaps always malignant, it’s far too late: the damage is done. And, worse, we probably will never know what the damage that has been done is.</p>
<h3 id="a-target-painted-on-our-backs">A target painted on our backs</h3>
<p>Points of control which are both far up the graph and well-known have targets painted on their backs. If Dr Evil, President Evil or General Secretary Evil decides that they’d like to compromise a large number of organisations, the things they are going to go for are the points of control which are far up the graph. And they’ll be willing to put a great deal of time, skill and money into this.</p>
<p>Points of control which are far up the graph are, as a result, all but certain to be attacked, and all but certain to be attacked by people with effectively unbounded resources. The only safe assumption to make is that these points of control <em>will</em> be compromised in due course: assuming otherwise is hopelessly naïve.</p>
<p>So you should be very, very careful to test anything you get from such places — especially software, which is far more mutable than hardware. And, if you are in charge of one of these places you should certainly not be suggesting that anyone blindly take your updates: that would be extremely irresponsible.</p>
<p>And yet this is exactly what happens: we are all actively encouraged to blindly trust software we receive from organisations with targets painted on their backs.</p>
<p>And that’s what this essay is about.</p>
<h2 id="insecurity-solutions">Insecurity solutions</h2>
<p>There are many good choices here, but I’ll just pick one: Qualys.</p>
<blockquote>
<p>The Qualys Cloud Platform and its powerful Cloud Agent provide organizations with a single IT, security and compliance solution — from prevention to detection to response! —<a href="https://qualys.com/" title="Qualys">Qualys</a><sup><a href="#2021-09-27-computer-insecurity-footnote-6-definition" name="2021-09-27-computer-insecurity-footnote-6-return">6</a></sup></p></blockquote>
<p>That sounds good, right? Except, wait: they’re providing <em>security</em> solutions. It’s in the nature of such solutions that they both need to be updated very frequently as new threats appear and require privileged access to systems. It almost certainly is not possible to do the kind of staged test and deploy I suggested above for software like this: if there’s a new compromise you want to know about it <em>now</em>, not in two weeks. Instead you really need to just accept updates from Qualys as and when they appear or, perhaps worse, allow them to pull data from your systems to check ‘in the cloud’ where you do not have control over the security of that data. That means that, if you are using Qualys tools on live systems, Qualys are a single point of control for you.</p>
<p>Qualys</p>
<blockquote>
<p>has over 10,300 customers in more than 130 countries, including a majority of the Forbes Global 100. — <a href="https://en.m.wikipedia.org/wiki/Qualys" title="Wikipedia: Qualys">Wikipedia</a></p></blockquote>
<p>That means that they’re a single point of control for a large number of very high-value targets for President Evil: Qualys have a target painted on their back, are illuminated by bright searchlights and are surrounded by flashing neon arrows pointing at the target.</p>
<p>So, well, they’ll know about this, won’t they? And although they can’t avoid being a target to some extent<sup><a href="#2021-09-27-computer-insecurity-footnote-7-definition" name="2021-09-27-computer-insecurity-footnote-7-return">7</a></sup>, they certainly will be addressing these problems to reduce the risk somehow, won’t they? Certainly they will have many documents and guides describing how to minimise the inevitable risk associated with using their products.</p>
<p>Not so much.</p>
<h3 id="how-to-lose-friends-and-alienate-people">How to lose friends and alienate people</h3>
<p>Start from <a href="https://www.qualys.com/documentation" title="Qualys documentation"><code>https://www.qualys.com/documentation</code></a>, then ‘Cloud Platform’ / ‘Scan authentication’ / ‘Unix record’ / ‘online help’ / ‘What credentials should I use?’ / ‘Learn more’ and you should find a link entitled ‘*NIX Authenticated Scan Process and Commands’ whose target is <a href="https://success.qualys.com/discussions/s/article/000006220"><code>https://success.qualys.com/discussions/s/article/000006220</code></a><sup><a href="#2021-09-27-computer-insecurity-footnote-8-definition" name="2021-09-27-computer-insecurity-footnote-8-return">8</a></sup>, from which</p>
<blockquote>
<p>When Qualys performs an authenticated scan against a *nix system with a properly configured authentication record we will create an ssh session using the credentials in the authentication record, check the effective UID (level of access), execute “sudo su -” (or other root delegation command configured in the record), re-check effective UID to ensure the elevation worked, then begin our checks.</p></blockquote>
<p><code>sudo su -</code> means ‘become <code>root</code> and spawn a shell’. Or, in other words, gain completely unconstrained access to the system with the highest possible level of privilege. Further down the same page you’ll find this:</p>
<blockquote>
<p>First, customers should be strongly discouraged from placing granular controls around the Qualys service account because of the reasons stated above. […] Even if it were possible to publish this list, it would likely take a lot of effort to maintain its currency.</p></blockquote>
<p>In other words: ‘don’t use fine-grained control to limit what our tool can do, because maintaining the list of commands it might run would be a lot of work for us.’</p>
<p>Yet further down the page is:</p>
<blockquote>
<p>Below is a list of commands that a Qualys service account might run during a scan. Remember not every command is run every time, and *nix distributions differ. This list of commands is neither comprehensive nor actively maintained.</p></blockquote>
<p>This is followed by a list of commands which includes <code>awk</code> (equivalent to uncontrolled <code>root</code> access), <code>firefox</code> (WTF?), <code>java</code> (<code>root</code> access again) and just a huge number of other commands all of which imply unconstrained root access.</p>
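<p>To see why a command like <code>awk</code> in such a list amounts to unconstrained <code>root</code> access, here is a minimal illustration (mine, not anything from Qualys’s documentation): any account allowed to run <code>awk</code> via <code>sudo</code> can use it to start a shell as <code>root</code>.</p>
<pre><code># Anyone permitted to run awk as root via sudo can do this:
sudo awk 'BEGIN { system("/bin/sh") }'
# awk's system() runs /bin/sh, and since sudo started awk as root the
# result is an interactive root shell.  Similar one-liners exist for
# find, perl, java and most of the other commands on the list.</code></pre>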
<p>That page also links to <a href="https://success.qualys.com/discussions/s/article/000006228"><code>https://success.qualys.com/discussions/s/article/000006228</code></a><sup><a href="#2021-09-27-computer-insecurity-footnote-9-definition" name="2021-09-27-computer-insecurity-footnote-9-return">9</a></sup>. Which contains this obvious falsehood:</p>
<blockquote>
<p>In a nutshell, all of our data point detections are scripts that need to be run as root. Running them as a non-root user would, in most cases, result in permission errors which cannot be distinguished from other error sources. That would result in incorrect data being returned by the scanner, which is why we do not support this. There is no way to make non-root scanning work reliably with a scanning model based on shell commands or shell scripts.</p></blockquote>
<p>It also contains this lovely example of why <code>sudo</code> is no good:</p>
<blockquote>
<p><code>sudo /usr/bin/find . -maxdepth 0 -name . -exec /bin/sh -c "su -" ";" -quit</code></p></blockquote>
<p>This is truly magnificent: anyone who has looked after <code>sudo</code> configuration will know immediately that <em>this is why you don’t allow unconstrained <code>find</code> in the commands you allow to be run</em>. But apparently the people at Qualys don’t understand that.</p>
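<p>For anyone who has not had to look after <code>sudo</code>: the point is that a <code>sudoers</code> rule which allows <code>find</code> with arbitrary arguments allows <em>everything</em>, because <code>find</code> will run whatever you like via <code>-exec</code>. A sketch follows; the account name is invented and the fragment is illustrative only.</p>
<pre><code># Hypothetical sudoers fragment: this looks narrowly scoped, since it
# permits exactly one binary, but it is equivalent to unrestricted root:
scanner ALL = (root) NOPASSWD: /usr/bin/find

# because the scanner account may now run
sudo /usr/bin/find / -maxdepth 0 -exec /bin/sh ';'
# which matches the rule and spawns a root shell.  This is why you
# constrain the arguments of anything you allow through sudo, and why
# you don't allow commands like find at all.</code></pre>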
<h3 id="the-terrifying-conclusion">The terrifying conclusion</h3>
<p>It is hard to read this material without coming to the conclusion that the people writing it — the people on whom you are relying to check your systems for security — do not care about the security of their customers’ systems if that security might cause momentary inconvenience for them. Worse, it is hard to read this material without coming to the conclusion that the people writing it do not <em>understand</em> the security architecture of *nix systems<sup><a href="#2021-09-27-computer-insecurity-footnote-10-definition" name="2021-09-27-computer-insecurity-footnote-10-return">10</a></sup> at all.</p>
<h3 id="but-they-have-no-choice">But they have no choice</h3>
<p>Well, the people who wrote the documents excerpted above are certainly patronising, and they also seem alarmingly incompetent. But, surely, the problem is real: I can poke fun at them all I like but that doesn’t actually help anything, does it?</p>
<p>This is a security scanner and this means that the things it is checking for change very fast: people who write malware do not give warning of what they are going to do in advance and do not make it easy to know when they are attacking you. When a new attack becomes known about, it needs to be checked for <em>right away</em>. And since the nature of the attack can’t be known in advance, the techniques needed to check for it can’t be known in advance, which means both that you will need to allow the scanner to run programs it has just fetched from Qualys, and also that those programs must be able to use all the facilities of the system, at the highest privilege level, to do their work. There’s just no way around this, is there?</p>
<p>And, despite what might appear from reading the above material, we therefore have to assume that everyone at Qualys knows they are an enormously attractive target for President Evil and that their security is thus impeccable: we have no choice.</p>
<h3 id="one-of-many">One of many</h3>
<p>And Qualys are just one of many: I have picked on them only because I had to pick on someone. As another example, there’s a company — a very famous company with a three-letter name — who sell a product which, if you install it according to their recommendations, requires you to grant unconstrained <code>root</code> access via <code>sudo</code> to an entire directory containing a huge number of shell scripts some of which are tens of thousands of lines long, and some of which <em>write other shell scripts</em>. The chances of that system not containing security problems are close to zero. But again, we have to trust them, even though the evidence that they don’t even understand what security means is overwhelming: after all they do have a three-letter name.</p>
<p>And this is everywhere you look: we are trusting the security of our systems to people who do not appear to understand what security means.</p>
<h2 id="supply-chain">Supply chain</h2>
<p>Isn’t this all just a bit alarmist? It’s all very well for me to go on about single points of control and companies with targets painted on their backs, but surely nothing bad ever really happens?</p>
<p>If you think that, then you haven’t been paying attention.</p>
<h3 id="solarwinds">SolarWinds</h3>
<p>SolarWinds are a company which write network-management tools used by many other companies, government organisations and others. One of their products is called Orion, which is used by <a href="https://en.wikipedia.org/wiki/SolarWinds" title="SolarWinds (Wikipedia)">about 33,000 public and private-sector organisations</a>. Most or all of those organisations download updates to the product either automatically or semi-automatically. This makes SolarWinds a very attractive target. Starting before October 2019 SolarWinds were compromised: in particular the build system for Orion was subverted in such a way that releases of the product contained malicious code. Between, perhaps, March and December 2020 the attackers used these compromised updates, together with other compromises, to attack <a href="https://en.wikipedia.org/wiki/2020_United_States_federal_government_data_breach" title="SolarWinds attack, Wikipedia">at least 200 organisations</a>, including multiple parts of the US federal government, NATO, the UK government, the European parliament, Microsoft and many others. A good description of this attack can be found <a href="https://www.lawfareblog.com/solarwinds-and-holiday-bear-campaign-case-study-classroom" title="SolarWinds and the Holiday Bear Campaign, Lawfare">here</a>. The people who did the attack were the Russian Foreign Intelligence Service, Sluzhba Vneshney Razvedki<sup><a href="#2021-09-27-computer-insecurity-footnote-11-definition" name="2021-09-27-computer-insecurity-footnote-11-return">11</a></sup>. I don’t know what the results of this attack were, and perhaps no-one outside Russia knows what was taken and what will be done with it. It is certainly very safe to say that the results were extremely severe, if not catastrophic.</p>
<p>It’s worth noting that the result of the build system for Orion being compromised was that the compromised releases <em>were properly digitally signed</em>: it is <em>not safe</em> to rely on digital signatures to prove that software has not been compromised in the case where the organisation signing the software has been compromised.</p>
<h3 id="qualys-again">Qualys again</h3>
<p>In early 2021 <a href="https://www.theregister.com/2021/03/03/qualys_ransomware_clop_gang/" title="Qualys ransomware">there was a security breach at Qualys</a>. It seems that this breach didn’t compromise their security tools: they got away with it, this time.</p>
<h3 id="this-is-not-the-end">This is not the end</h3>
<p>These are both <a href="https://en.wikipedia.org/wiki/Supply_chain_attack" title="Supply chain attack, Wikipedia">supply chain attacks</a>: many others have happened, and without doubt many more will happen. In the context of this essay, supply chain attacks are a result of having single points of control for security management which are outside an organisation and which serve many organisations, making them interesting to attackers with large resources.</p>
<p>But what can we do? It is inevitable that these organisations will be attacked, and almost inevitable that they will be compromised. In many cases we can mitigate the risk by having a fairly long test and deployment cycle and hoping that either we find the problems or that others do before we start relying on the tool. For security scanners we can’t do that, because we can’t afford to wait. We have to trust suppliers of security products, and we have to allow them to run privileged code on our systems which we cannot check, because the alternative of not checking for security compromises is even worse.</p>
<p>We have to trust them because, in fact, we have no other choice.</p>
<h2 id="is-this-the-end">Is this the end?</h2>
<p>So, this seems like an insoluble problem, doesn’t it? A security scanner has terrifying properties, by its nature:</p>
<ul>
<li>it must be updated very frequently, far too frequently to perform safety checks;</li>
<li>it must have privileged access to live systems.</li></ul>
<p>There’s just no way around that, is there? And of course, President Evil knows this too: the organisations providing these tools make <em>extremely</em> good targets because the nature of the tools means both that any compromise is very serious and that compromises are very hard to detect. And there is therefore no way around the fact that the suppliers of these tools will be targets for President Evil, will, in due course, be compromised, and all is then lost.</p>
<p>Well, perhaps not. Perhaps it is possible to reduce the risk.</p>
<h3 id="a-sketch">A sketch</h3>
<p>The problem to solve is that a security scanner must be updated very frequently and must run with high privilege. Suppliers of such tools, even if they are competent (which is not always clear), are extremely valuable targets for attackers with very large resources and thus are almost certain to be compromised. So running these scanners on live systems needs to be avoided, even though the scanners need access to the live systems to run.</p>
<p>Well, there’s a way around that. If you could make an identical copy of any system then you could scan the <em>copy</em>. If the machine has a vulnerability, so will the copy. If the <em>scanner</em> is compromised then it will attack only the copy, which doesn’t matter, since it’s only a copy, which will be destroyed immediately after being scanned.</p>
<p>It is more complicated than that, of course: the copy needs actually to be running, as lots of things will almost certainly only really show up when a system is running (what network ports does it have open, for instance). So the copy needs to be more than just a blob of data: it needs to be a real thing running programs. And the copy has to think it’s <em>not</em> a copy: enough of the world around it needs to be faked up so it thinks it’s doing real work. But all of this world must be <em>fake</em> — under no circumstances should the copy be able to see real data or talk to real live systems. Finally, the scanner needs to be very restricted in the data it can upload: since the whole point is that we don’t trust the scanner we can’t allow it to ship all the data on the system to who-knows-where when it’s been compromised. Ideally the scanner should return a single bit: is the thing it is scanning compromised? If it is then this tells us to look more closely at it, for instance by looking at a report stashed locally on the copy.</p>
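<p>To make the shape of this concrete, here is a minimal sketch in shell, assuming ZFS-backed guests. <code>boot_isolated</code> and <code>run_scanner</code> are hypothetical placeholders for site-specific machinery, not real commands: booting the clone on a network with no route to anything real, and invoking the scanner in such a way that it can return only its single bit.</p>
<pre><code>#!/bin/sh
# Sketch only: clone a guest's storage, scan the clone in isolation,
# report a single bit, destroy the clone.  boot_isolated and
# run_scanner are hypothetical helpers, not real commands.
set -e

guest="$1"                            # the live guest to check
snap="pool/guests/${guest}@scan-$$"   # one-off snapshot of its storage
clone="pool/guests/${guest}-scan-$$"  # writable clone of that snapshot

zfs snapshot "$snap"                  # point-in-time copy: cheap, atomic
zfs clone "$snap" "$clone"            # a copy the scanner is free to wreck

boot_isolated "$clone"                # fake world: no real data or network

if run_scanner "$clone"; then         # the scanner returns its one bit
  status=0                            # clean, as far as the scanner can tell
else
  status=1                            # inspect the clone by hand, carefully
fi

zfs destroy "$clone"                  # the copy is disposable: dispose of it
zfs destroy "$snap"
exit $status</code></pre>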
<p>Doing this is not simple to arrange, but it is perfectly possible. Here are some objections with answers.</p>
<p><strong>But, cloning systems like this is hard, isn’t it?</strong> Not really. For a start, if the systems concerned are virtualised then pretty much all serious hypervisors support making snapshots and clones of the virtual machines they’re running, and moving those snapshots and clones between different physical hardware. If the systems <em>aren’t</em> virtualised then things are harder, but this kind of ‘make a carbon copy of a system’ is what you should already be doing for backup and disaster recovery (DR). Some people, apparently, maintain DR systems by <em>manually</em> keeping them up to date with the live systems. If you are doing that, stop: create the DR systems by cloning the live systems. If you don’t have a good approach to cloning do it by restoring backups. If you can’t restore your backups (or you aren’t making backups) then you are already dead, so nothing matters.</p>
<p><strong>But, this means doubling the size of the environment, doesn’t it?</strong> No: you only need enough extra computational resources to scan each little chunk of your environment, since you can reuse them. But, you <em>already</em> need enough extra resources to support DR: just use those!</p>
<p><strong>But, this will be hard to set up, won’t it?</strong> Yes, it will require a fair amount of work. But if you don’t do this, or something like it, then within the next few years your systems (almost certainly) <em>will</em> be compromised and your data (almost certainly) <em>will</em> leak to bad people as a result. So the question is: is the cost of that higher, or lower, than the cost of this, or something like it?</p>
<p><strong>But, the things that do the cloning can be attacked, can’t they?</strong> Yes, they can. But these tools are a tiny fragment of your infrastructure. They are, in fact, a single point of control, and one you have to be very, very careful about. This sketch doesn’t <em>remove</em> the problem since nothing can do that: it just makes it much less severe and much better controlled.</p>
<p><strong>But, lots of details are missing, aren’t they?</strong> Yes. This is a sketch, written by some person on the internet: it’s not a complete solution. (If you want a complete solution pay me lots of money and I’ll make you one.)</p>
<p><strong>But, you haven’t thought of this thing, and that thing, and …, have you?</strong> No. It’s a <em>sketch</em>.</p>
<h2 id="because-we-want-to">Because we want to</h2>
<p>Solving these problems, in the sense of making them much less likely to happen and the consequences when they do happen much less bad, is not easy. But it is <em>possible</em>, as the sketch in the previous section shows. <em>Not</em> solving them means that, almost certainly, in the next few years a catastrophe will happen. I said at the beginning of the essay</p>
<blockquote>
<p> it is surprising that companies whose stated aims are to increase security are effectively working to make their customers’ systems less secure.</p></blockquote>
<p>But it isn’t, not really: it is depressing, but not really surprising, because the entire history of computing has been made up of people avoiding solving problems through laziness, lack of imagination, or the desire to make a quick buck.</p>
<p>I think that should stop. Solving these problems will be hard, but we can solve them if we only want to.</p>
<hr />
<h2 id="appendix-large-complex-computing-installations">Appendix: ‘large, complex computing installations’</h2>
<p>I’ve used this term above without ever really defining it. Defining it is not entirely easy, and the meaning of any definition changes over time: once an IBM System/360 Model 70 might have been thought of as a very large computing installation, but today it would be a very small one other than, perhaps, physically.</p>
<p>Every time I want to write about large computing installations I find I don’t know the right words any more: is a large computing installation one with many systems, or is it one large system? What, anyway, is a ‘system’? Once everyone knew what it meant: the system was the departmental VAX, and later there were several systems which were the VAX (still creaking along on life-support) and a bunch of Suns, some of which were workstations and some of which were fileservers.</p>
<p>But that meaning has dissolved away. For a while it was safe to talk about ‘servers’: everyone knew that a server was something that lived in a rack along with other servers<sup><a href="#2021-09-27-computer-insecurity-footnote-12-definition" name="2021-09-27-computer-insecurity-footnote-12-return">12</a></sup>. But that in turn has dissolved away as the relationship between physical hardware and the programs that run on it becomes more complicated and often more remote.</p>
<p>So what, today, are the right words? What is a large installation and what a small one? Here’s my attempt at a definition.</p>
<ul>
<li>An installation is <strong>large</strong> if it has a very large number of truly concurrent threads of control. ‘Truly concurrent’ means ‘supported by hardware’, and what is meant by ‘very large’ will increase over time: at the time of writing (mid 2021) this probably means at the very least tens of thousands.</li>
<li>An installation is <strong>complex</strong> if it is performing a large number of conceptually distinct tasks. Again the definition of what is a large number may change over time although it will probably increase more slowly than the number of threads of control.</li></ul>
<p>This definition, for instance, would make many HPC systems large, but not complex: although they have a large number of independent threads of control, they probably run a rather small number of different programs, and perhaps only one (probably several copies of that one, of course). It’s possible for a system to be complex, but not large, although that is unusual.</p>
<p>I’m not sure if this definition is adequate, but I think it will serve here.</p>
<p>In the main text I use ‘installation’ and ‘system’ interchangeably: I should probably only use ‘installation’ but I don’t. When I talk about an individual computer in a large installation I’ve tried to say ‘machine’.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-09-27-computer-insecurity-footnote-1-definition" class="footnote-definition">
<p>See appendix. <a href="#2021-09-27-computer-insecurity-footnote-1-return">↩</a></p></li>
<li id="2021-09-27-computer-insecurity-footnote-2-definition" class="footnote-definition">
<p>Once upon a time I worked for a then-famous company which sold holidays over the internet. We used to sneer at Amazon for picking a simple problem — mostly selling books, then — to solve: books just sit in a warehouse waiting to be bought, for decades if need be, while everyone wants a different holiday and holidays have very definite sell-by dates. One day I realised that what Amazon had done — picking a simple, scalable problem to solve — was <em>smart</em>, and what we were trying to do was not smart and that was why they were going to get rich and we weren’t. I didn’t get rich, and I don’t know if that company even still exists. <a href="#2021-09-27-computer-insecurity-footnote-2-return">↩</a></p></li>
<li id="2021-09-27-computer-insecurity-footnote-3-definition" class="footnote-definition">
<p>A <em>graph</em> here is not a <em>plot</em>: it’s a drawing of some kind of network consisting of nodes (points of control, for instance) and arcs between those nodes which may or may not have arrows on them indicating direction: if a controls b then there will be a node for a, a node for b and an arrow from a to b indicating control. <a href="#2021-09-27-computer-insecurity-footnote-3-return">↩</a></p></li>
<li id="2021-09-27-computer-insecurity-footnote-4-definition" class="footnote-definition">
<p>By ‘up’ I mean in the direction of ‘is controlled by’ while ‘down’ means in the direction of ‘has control over’. <a href="#2021-09-27-computer-insecurity-footnote-4-return">↩</a></p></li>
<li id="2021-09-27-computer-insecurity-footnote-5-definition" class="footnote-definition">
<p>Of course <a href="http://www.lel.ed.ac.uk/~gpullum/loopsnoop.html" title="Scooping the loop snooper">we <em>can’t</em> exhaustively check software</a> in any case, but we can do a lot better than ‘not checking it at all’. <a href="#2021-09-27-computer-insecurity-footnote-5-return">↩</a></p></li>
<li id="2021-09-27-computer-insecurity-footnote-6-definition" class="footnote-definition">
<p>All the text in this essay was extracted from the linked sources in early September, 2021. Things may have changed since then, but what is here was there then. I have marked elisions with ‘[…]’. <a href="#2021-09-27-computer-insecurity-footnote-6-return">↩</a></p></li>
<li id="2021-09-27-computer-insecurity-footnote-7-definition" class="footnote-definition">
<p>For instance, if Qualys can be compromised in such a way that their tools fail to report other compromises, then this would allow those other compromises to propagate undetected, even if the tools provided by Qualys are not themselves doing direct harm. <a href="#2021-09-27-computer-insecurity-footnote-7-return">↩</a></p></li>
<li id="2021-09-27-computer-insecurity-footnote-8-definition" class="footnote-definition">
<p>This may formerly have been <code>https://qualys-secure.force.com/discussions/s/article/000006220</code>. <a href="#2021-09-27-computer-insecurity-footnote-8-return">↩</a></p></li>
<li id="2021-09-27-computer-insecurity-footnote-9-definition" class="footnote-definition">
<p>May formerly have been <a href="https://qualys-secure.force.com/discussions/s/article/000006228"><code>https://qualys-secure.force.com/discussions/s/article/000006228</code></a>. <a href="#2021-09-27-computer-insecurity-footnote-9-return">↩</a></p></li>
<li id="2021-09-27-computer-insecurity-footnote-10-definition" class="footnote-definition">
<p>To be fair, ‘the security architecture of *nix systems’ does give the impression that there is one — that it is something made of marble and stainless steel rather than partly-dissolved mud bricks and rotting straw. <a href="#2021-09-27-computer-insecurity-footnote-10-return">↩</a></p></li>
<li id="2021-09-27-computer-insecurity-footnote-11-definition" class="footnote-definition">
<p>In other words, this time it was indeed President Evil. <a href="#2021-09-27-computer-insecurity-footnote-11-return">↩</a></p></li>
<li id="2021-09-27-computer-insecurity-footnote-12-definition" class="footnote-definition">
<p>Some very large or very old servers might have been whole racks, or even several. <a href="#2021-09-27-computer-insecurity-footnote-12-return">↩</a></p></li></ol></div>Neurodivergenturn:https-www-tfeb-org:-fragments-2021-08-17-neurodivergent2021-08-17T11:02:35Z2021-08-17T11:02:35ZTim Bradshaw
<p>Recently I wrote two articles about <a href="https://www.tfeb.org/fragments/2021/03/24/richard-stallman/" title="Richard Stallman">Richard Stallman</a> (RMS) and the <a href="https://tfeb.org/fragments/2021/07/24/the-lost-cause-of-the-free-software-foundation/" title="The lost cause of the Free Software Foundation">Free Software Foundation</a> (FSF). Someone who is autistic wrote to me and pointed out some unfortunate implications of what I wrote, which were both wrong and offensive to neurodivergent people: I am sorry for that. The remainder of this article is an attempt to correct those mistakes.</p>
<!-- more-->
<h2 id="the-things-i-wrote">The things I wrote</h2>
<p>From <a href="https://www.tfeb.org/fragments/2021/03/24/richard-stallman/" title="Richard Stallman">the first article</a>, with elisions indicated as ‘[…]’ and emphasis added.</p>
<blockquote>
<p>[…] I think there is only one conclusion to draw from this: something is badly wrong with his mind which makes it extremely hard for him to understand notions such as consent, and probably other things as well. Perhaps he is someone who deserves sympathy, not contempt. But, <em>like other people who suffer from such problems, he needs to be kept out of situations where he can do harm</em>.</p></blockquote>
<blockquote>
<p>[…] almost certainly he is ill rather than evil: <em>there is simply something which does not work properly in his mind which makes him unable to understand these things</em>.</p></blockquote>
<p>From <a href="https://tfeb.org/fragments/2021/07/24/the-lost-cause-of-the-free-software-foundation/" title="The lost cause of the Free Software Foundation">the second article</a>.</p>
<blockquote>
<p>It seems likely that RMS himself is ill, or at least not neurotypical, rather than malevolent: he almost certainly is someone who really finds it very hard to understand that paedophilia is abhorrent, for instance. And like most people, he wants sex, but unlike most people he fails to understand that the way to get it is not to repeatedly harass women. If so, he is clearly someone who deserves sympathy and understanding. <em>But he also should not be in a position where he has any kind of power over people</em>: after all, psychopaths are also people who are ill, or not neurotypical in a different way, and <em>you definitely don’t want psychopaths in positions of power or responsibility</em>.</p></blockquote>
<p>I have generally assumed that no-one reads any of these articles other than me: I write them only because I have to write, and I’d like it to be <em>possible</em> for other people to read some of what I write. So I’m often a bit casual: that doesn’t make it any better.</p>
<h2 id="a-story-of-two-cats">A story of two cats</h2>
<p>I have a small young cat: I don’t know his background, but it seems likely he was taken somewhere far from his home by whoever had him previously and abandoned there, when he inconveniently stopped being a kitten. Probably most of his previous dealings with other cats were either with his siblings or his mother and what he likes to do is to <em>play</em> in the fierce way that kittens do with each other. He does not mean harm — he bites only very gently and does not really scratch — but he plays the only way he knows how to play.</p>
<p>I also have a much older, much larger, cat who is in middle age and likes the things cats in middle age like: to sleep, to sit on people, to spend his nights outside in the summer and by the stove in winter. He still remembers playing but mostly it is now in his past.</p>
<p>And the young cat worships the older cat and wants to play with him, and the older cat does not like this as he is old and set in his ways. But the older cat is also polite: he understands that the young cat lives here, and he doesn’t want to make the sort of strong point which the young cat would remember and which might involve bits missing from his ears. So he growls and bats at the young cat but goes no further than that. And we chastise the young cat and explain he should not be doing this, unless, occasionally, the older cat wants to play.</p>
<p>And the young cat finds this hard to understand: he is desperate to play but the cat he worships won’t, usually, play with him, but also, being polite, won’t say this in strong enough terms. It is hard for him.</p>
<p>But he is learning: he learned very quickly that when the older cat was eating or drinking no playing was to be done; he has learned that if he is well-behaved outside the older cat will take him to interesting places; he has learned that when the older cat is sitting on someone he is not to play. And in due course he will learn all the rules around playing with the older cat, and they will live comfortably together.</p>
<h2 id="three-mistakes-and-some-more-mistakes">Three mistakes, and some more mistakes</h2>
<p>The young cat is learning what the rules are, even though the rules are in vigorous disagreement with his instincts. <em>And he’s a cat</em>. In what I wrote about RMS quoted above I have implied that non-neurotypical people are <em>less able to learn than a cat</em>: this is just grotesquely insulting to neurodivergent people. Neurodivergent people may not, for instance, get various behavioural clues when approaching someone in whom they are sexually / romantically interested, but they certainly will understand when the person says the approach is unwelcome, and they also certainly will learn how to negotiate this sort of encounter: to imply otherwise is wrong, offensive, and very stupid on my part.</p>
<p>And a consequence of this first mistake is that I said that neurodivergent people should not be allowed to have positions of power over people. That’s wrong. Neurodivergent people, like anyone else, might not <em>want</em> to take such positions but, since they can learn, there is no reason to think that they would be any worse at them than anyone else, and in fact they might well be better, since their understanding may be more conscious. Again, I was both wrong and stupid to say this, and it’s an offensive view.</p>
<p>The third mistake is that I implied that neurodivergent people are <em>ill</em>: that’s wrong in a horrible way. They’re not ill, they’re just not neurotypical. Saying neurodivergent people are ill is like saying people who have skin of the ‘wrong’ colour are ill, and just as offensive. Wrong, stupid and offensive, again.</p>
<p>And finally I have, in part, conflated neurodivergent people with psychopaths<sup><a href="#2021-08-17-neurodivergent-footnote-1-definition" name="2021-08-17-neurodivergent-footnote-1-return">1</a></sup>, which is wrong. And then, of course, I’ve gone on to assume that psychopaths <em>also</em> can’t learn and as a result should never be in positions of power over other people. But psychopaths <a href="http://www.twitlonger.com/show/dh5l3q" title="Jon Ronson's letter from a psychopath"><em>can</em> learn if they choose to</a>, and if they do learn then there’s no strong reason why they should not be in positions of power.</p>
<p>All of these mistakes were unintentional, but that doesn’t excuse them: I should have thought more carefully before writing. I’m sorry.</p>
<h2 id="where-this-leads">Where this leads</h2>
<p>The underlying reason I made these mistakes is that I didn’t want to think of RMS as a bad person. To do that I invented this fantasy that he was not bad, but simply, by virtue of his assumed neurodivergence, could not learn that his behaviour was wrong. And that does not fly: RMS is a human being and could learn if he chose to. He hasn’t learned because he is, in fact, a bad person.</p>
<p>He may or may not be neurodivergent but, without doubt, <strong>Richard Stallman is a bad person</strong> and the people who support him are also bad people<sup><a href="#2021-08-17-neurodivergent-footnote-2-definition" name="2021-08-17-neurodivergent-footnote-2-return">2</a></sup>.</p>
<h2 id="caring-and-not-caring">Caring and not caring</h2>
<p>The person who picked me up on this, and who has read this article, wrote ‘I’m often not very good at expressing myself’. But they also wrote this:</p>
<blockquote>
<p>Some people have trouble figuring out how people feel, but there is no neurological condition that makes it impossible to care how people feel.</p></blockquote>
<p>This is, simply, the most beautiful description I can imagine of what I had not understood. So, thank you.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-08-17-neurodivergent-footnote-1-definition" class="footnote-definition">
<p>By ‘neurodivergent’ I mean, I think, ‘autistic spectrum’: I think it’s at least <a href="http://www.shiftjournal.com/2011/10/11/what-is-psychopathys-place-in-neurodiversity/" title="What is psychopathy's place in neurodiversity?">possible to argue</a> that psychopaths & sociopaths are also not neurotypical, but that’s not what I mean here. The language I’m using may be incorrect, and if that’s the case I’ll correct it. <a href="#2021-08-17-neurodivergent-footnote-1-return">↩</a></p></li>
<li id="2021-08-17-neurodivergent-footnote-2-definition" class="footnote-definition">
<p>It is a common excuse to say that ‘I supported this bad person, but I am not myself a bad person’. Yes, yes you are: if you vote for a racist who you know is a racist then <em>you are a racist</em>, and if you support a man who repeatedly harasses women, then you are someone who thinks that this behaviour is acceptable, and you are therefore an awful human being. <a href="#2021-08-17-neurodivergent-footnote-2-return">↩</a></p></li></ol></div>The network is the computerurn:https-www-tfeb-org:-fragments-2021-08-04-the-network-is-the-computer2021-08-04T10:31:35Z2021-08-04T10:31:35ZTim Bradshaw
<p><a href="https://www.bbc.co.uk/news/business-58068998" title="BBC">Rishi Sunak has told people that working from home may hurt their career</a>. Sunak, like many conservatives, is frightened, not only of the future, but of the present.</p>
<!-- more-->
<p>I am quite quite sure that Sunak is not acting in his own or his rich friends’ narrow self-interest here, as they stare horrified at the commercial property portfolios which have made them as rich as they are and wonder what they’re now worth. Quite sure.</p>
<p>So since that can’t be the case (I mean, it couldn’t be, could it?) what’s happening is that Sunak and others like him are simply behaving the way you would expect them to. They are, well, conservative: they like things to stay the same. Often they are stuck in the past and unwilling to accept change. That hasn’t often been the case for the johnsonite tory party which is not, in fact, a conservative party at all but a quasi-fascist insurgency, but it is the case here.</p>
<p>Because what finally happened, in late March 2020, was the internet. If you are old enough you may remember what the internet was meant to give us. Before the internet, you had to go to some special anointed place where the computers were, with their vast ranks of drums and endlessly-spooling tape drives, served by people in white coats in rooms with glass walls from behind which the onlookers would stare, amazed at the computational resources being deployed for who-knows-what purpose. When the internet came the computers in their cathedrals would dissolve into the landscape, and suddenly they would be <em>everywhere</em>. No longer would you have to go to a special building where the terminals were: you could be anywhere. You could be in a café, in the park, on the beach, at home: anywhere. And you would not have to spend a fifth or more of your waking life in various tin boxes, and all your free time exhausted. Life would be better for almost everyone: everyone except those who owned the rotting, empty palaces full of rusting, unused terminals.</p>
<p>Well, the internet didn’t happen when we thought it would: instead it got parasitized by various soul-eating entities with names like ‘google’ and ‘facebook’ which made it not only almost useless, but usually actively harmful: suddenly, not only did you have to spend three hours in a tin can, you somehow had to find another three hours to spend feeding the parasites. And people forgot what it was meant to have done.</p>
<p>And then, in March 2020, another parasite came. And, because we all now had to stay at home to escape from this new parasite, finally, the internet happened. Finally, many of us are free. At first, of necessity, we all worked from home, but as the new parasite fades we will, finally, be able to work from anywhere: from a coffee shop, from a museum, from an art gallery, from a park; even, perhaps, sometimes from home. But not from some vast factory full of terminals.</p>
<p>And Sunak can’t understand this. For him it will always be the late 20th century: for him that brief period of history that started in the late 19th century and is all he has ever known must last for ever, because for him no change is possible. But change has, finally, come, and he cannot stop it.</p>
<p>At last, the future has come and we are free.</p>
<hr />
<p>An earlier version of this was <a href="https://forums.theregister.com/forum/all/2021/08/03/uk_chancellor_rishi_sunak_returning_to_office/#c_4308332">a comment on The Register</a>.</p>The lost cause of the Free Software Foundationurn:https-www-tfeb-org:-fragments-2021-07-24-the-lost-cause-of-the-free-software-foundation2021-07-24T08:50:53Z2021-07-24T08:50:53ZTim Bradshaw
<p>The Free Software Foundation has <a href="https://www.fsf.org/news/statement-of-fsf-board-on-election-of-richard-stallman" title="Statement of FSF board on election of Richard Stallman">reelected Richard Stallman</a> to its board. At first glance this looks like a wilful act of self-harm by the FSF: RMS has <a href="https://www.tfeb.org/fragments/2021/03/24/richard-stallman/">expressed opinions which are abhorrent</a> and has behaved appallingly towards women, at least. This is to misunderstand both what the cause of the FSF really is and what their options for that cause now are.</p>
<!-- more-->
<p>[What follows is wrong in some important ways: please see <a href="https://www.tfeb.org/fragments/2021/08/17/neurodivergent/">this article</a> which has both corrections and an apology.]</p>
<h2 id="the-cult-of-richard-stallman">The cult of Richard Stallman</h2>
<p>RMS is, to put it rather mildly, someone who a large number of people find <a href="https://www.tfeb.org/fragments/2021/03/24/richard-stallman/" title="Richard Stallman">extremely toxic</a> but who is unsurprisingly also supported by <a href="https://www.theregister.com/2021/04/12/free_software_foundation_doubles_down/" title="FSF doubles down on Richard Stallman's return: Sure, he is 'troubling for some' but we need him, says org">other groups of people</a>. The people who support him are generally exactly the sort of people you would expect to support him — white male programmers — and have exactly the sort of views you would expect: they’re bigots. They’re not people with whom it would be pleasant to work if you were female, not white, or both. In fact they’re not the sort of people with whom it would be pleasant to work at all if you were a decent human being.</p>
<p>It seems likely that RMS himself is ill, or at least not neurotypical, rather than malevolent: he almost certainly is someone who really finds it very hard to understand that paedophilia is abhorrent, for instance. And like most people, he wants sex, but <em>unlike</em> most people he fails to understand that the way to get it is not to repeatedly harass women<sup><a href="#2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-1-definition" name="2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-1-return">1</a></sup>. If so, he is clearly someone who deserves sympathy and understanding. But he also should not be in a position where he has any kind of power over people: after all, psychopaths are <em>also</em> people who are ill, or not neurotypical in a different way, and you definitely don’t want psychopaths in positions of power or responsibility.</p>
<p>Some of the people who support RMS within and outside the FSF are probably also not neurotypical in similar ways. But the great majority are: they are simply the sort of people who believe in the innate superiority of white men, that women are inherently inferior and exist to satisfy the sexual needs of men regardless of their own desires: in other words they are racists and sexists of the worst kind. They also, perhaps, don’t have any serious problem with sex with young girls<sup><a href="#2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-2-definition" name="2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-2-return">2</a></sup>. These people <em>are</em> malevolent. And RMS is a quite convenient figurehead for them as he is enabling them to do exactly what they want to do anyway. While I and many other people believed ten years ago that racism, sexism and other bigotries were fading into the past in many advanced countries, the events of the last decade have made it very clear that this is not the case. Very many people have always held horrible views: between perhaps the late 1970s and the mid 2010s they simply were less willing to speak those views in public. The ascent of ‘populism’ — which really means, among other things, white male supremacy — means that they are no longer so hesitant about expressing their views in public. You don’t have to read far into the comments on, for instance, <a href="https://theregister.com/" title="The Register">The Register</a> to see how common some of these views are<sup><a href="#2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-3-definition" name="2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-3-return">3</a></sup>, and they can also be widely seen elsewhere: this is not something I am making up.</p>
<p>So it seems like what is happening with the FSF is simple: white male programmers have maintained their position of dominance and will continue to drive out everyone else. The FSF continues to be a white male supremacist organisation as it has always implicitly been.</p>
<p>Well, that’s all true, but there is more to it than that.</p>
<h2 id="a-guild">A guild</h2>
<p>The FSF is essentially a <em>guild</em>:</p>
<blockquote>
<p><strong>guild</strong> or <strong>gild</strong> /gild/
<br />[…] A mediaeval association looking after common (<em>esp</em> trading) interests, providing mutual support and protection, and masses for the dead
<br />— Chambers</p></blockquote>
<p>Like all guilds this one’s underlying purpose is to benefit its members, who regard themselves as uniquely, innately blessed to be members of the guild, and to forbid entrance to those they regard as inferior. As with many guilds this is dressed up in what are essentially religious clothes: only those blessed by the god of the guild are allowed to join. Guilds are distinct from unions in this way: anyone can join a union if they pay the fees, but only the elect of god can join the guild. The FSF and much of the culture around the broader free software movement<sup><a href="#2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-4-definition" name="2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-4-return">4</a></sup> isn’t socialist: it’s mediaeval.</p>
<p>The free software guild has also been very successful. Because it has become so dominant in the fields in which it operates it has all but driven out the groups it regards as not blessed from those fields. It’s not currently legal to do this (see below), but since the guild is so dominant it is inevitable that anyone starting work in one of the fields it operates in will encounter guild members, who will then make their lives so miserable that they leave, and pretty quickly non-elect people simply don’t even consider working in those fields. The guild got started in the mid 1980s and you can see its success in the figures. In 1984 <a href="https://tfeb.org/fragments/2020/05/09/sexism-in-computer-science/" title="Sexism in computer science">one group of the non-elect</a> made up 38% of those entering the workforce in one of the guild’s areas: by 2011 they made up under 18%. In areas directly under the control of the guild they now make up under 10% (and may never have made up more than that).</p>
<p>Well, of course mediaeval trade practices are even more hostile to capitalism than socialist ones are: the whole elect-of-god thing is just toxic to capitalism as it restricts the workforce enormously, and the weird religious ornamentation surrounding everything the guild does is also not helping anything. Capitalists want the guild to die or become irrelevant, so their available workforce can be much larger, they can drive down wages to reasonable levels, make more money for themselves and everyone else.</p>
<p>Capitalists are also often working in legal systems which make what the guild is doing illegal, and they are worried about that. So this is a rare case where the desires of the plutocrats and those of decent human beings align: neither wants the bigotry and pseudo-religion that is what the free software guild stands for.</p>
<p>Almost inevitably, the capitalists will win, and at some level the guild probably knows this. It is faced with two options.</p>
<p>It could choose to diminish and go into the west: remaining in existence but achieving an accommodation with the capitalists. This is pretty much what, for instance, the Anglican church (a descendant of another hugely powerful mediaeval institution) is doing: gradually relaxing all sorts of restrictions on things in order to avoid an outright confrontation with the rest of society. The Anglican church, in England (and its episcopal equivalents elsewhere in the UK) is now all but irrelevant in practical terms to most people: there are probably gay men who still worry that having sex with other men is ‘sinful’ but the number is diminishing, as one example. But it still exists, it still owns property, it still is involved in all sorts of ceremonial occasions.</p>
<p>Or the guild could choose to fight. It will lose, we must hope<sup><a href="#2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-5-definition" name="2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-5-return">5</a></sup>, but, like the nazis at the end of the second war, it will go out in a blaze of what its members consider to be glory. Because it is quite powerful, this fight will cause a great deal of damage: it will destroy the guild of course, but many people and organisations not directly involved in it will also be badly hurt. But, from the perspective of the more fundamentalist members of the guild, this is a war for their religion and a war which they are obliged, therefore, to fight. They must fight even though they know they will lose, and even though the damage caused to others and to society as a whole will be severe.</p>
<p>And so, although they know their cause is already lost, the guild has chosen to fight.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-1-definition" class="footnote-definition">
<p>And, given how hard it is for him to understand that paedophilia is wrong, perhaps girls as well. <a href="#2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-1-return">↩</a></p></li>
<li id="2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-2-definition" class="footnote-definition">
<p>They will, of course, deny this. But they also will defend the remarks RMS made about paedophilia as not being particularly problematic, and talk about how unreasonable it was for various people for whom Jeffrey Epstein procured underage girls to check that they had consented, or could consent, to what was being done to them. And, after all, why should someone who believes that consent is not required for him to have sex with a woman really have a problem with having sex with a child, who cannot consent? Remember that these views were <em>completely standard</em> until quite recently in many societies. <a href="#2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-2-return">↩</a></p></li>
<li id="2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-3-definition" class="footnote-definition">
<p>Note that this is not intended to reflect on the <em>staff</em> of The Register, or its editorial policy, merely on the demographic of some of its commentariat. <a href="#2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-3-return">↩</a></p></li>
<li id="2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-4-definition" class="footnote-definition">
<p>To be very clear: I am not against free software, which I believe has done a lot of good. <a href="#2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-4-return">↩</a></p></li>
<li id="2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-5-definition" class="footnote-definition">
<p>We, together with the plutocrats, must hope it will lose because we must hope that the populist parties which have gained so much ground in recent years are eventually defeated and that democracy does not give way to regimes which are explicitly white male supremacist. Those regimes would destroy both liberal democracy <em>and</em> plutocratic capitalism, just as the nazis did in 1930s Germany. The rape of the UK by the Johnsonist party (still known as the ‘Conservative party’ although it is no longer a conservative party) makes it clear that this is not a safe assumption. <a href="#2021-07-24-the-lost-cause-of-the-free-software-foundation-footnote-5-return">↩</a></p></li></ol></div>The idiocy of Marsurn:https-www-tfeb-org:-fragments-2021-06-18-the-idiocy-of-mars2021-06-18T08:17:24Z2021-06-18T08:17:24ZTim Bradshaw
<p>If you think that we can continue economic growth by simply moving to Mars, you’re a fool.</p>
<!-- more-->
<p>Many people do not understand that the growth in resource usage by humans will, if not stopped, result in us hitting the limits of what Earth can provide at some point in the fairly near future. Unless we address this problem the result will probably be the collapse of civilisation. Some of the people who think they <em>do</em> understand this problem argue that, well, there is Mars<sup><a href="#2021-06-18-the-idiocy-of-mars-footnote-1-definition" name="2021-06-18-the-idiocy-of-mars-footnote-1-return">1</a></sup>: we can just go there and carry on as normal and everything will be fine.</p>
<p>It won’t be, and here’s why.</p>
<h2 id="growth">Growth</h2>
<p>We all hear about <a href="https://en.wikipedia.org/wiki/Economic_growth">economic growth</a> in the news. And people like it when it’s some positive number. Growth:</p>
<blockquote>
<p>can be defined as the increase or improvement in the inflation-adjusted market value of the goods and services produced by an economy over time. [From the Wikipedia article above.]</p></blockquote>
<p>What that means is that growth is the rate of change of some measure of the size of an economy. Growth is measured as a percentage increase in the size of the economy per year, which I will call \(g\), so if at some time \(t\) (measured in years) the economy has size \(s(t)\), then \(s(t + 1) = s(t)(1 + g/100)\). That means that if growth is constant over the long term, the size of the economy is increasing exponentially with time:</p>
<p>\[
s(t) = s_0 e^{t/\tau}\quad\text{where $\tau = 1/\ln(1 + g/100)$}
\]</p>
<p>And economists are very keen that \(g\) should not drop to zero or, still worse, become negative: they want it to be some long-term constant value.</p>
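<p>To get a feel for the numbers (my arithmetic, not the economists’): a seemingly modest \(g = 2\) gives</p>
<p>\[
\tau = \frac{1}{\ln 1.02} \approx 50.5\,\mathrm{y}
\]</p>
<p>so the economy doubles every \(\tau\ln 2 \approx 35\) years, and is a hundred times bigger after about \(233\) years.</p>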
<h3 id="rescaling">Rescaling</h3>
<p>One possibility is that this measure \(s(t)\) might simply involve rescaling the economy somehow: we think it’s bigger but in fact it’s not. Let’s say that I’m interested in buying aluminium: if, every year, the economy ‘grows’ by a factor of \(1 + g/100\), but the price of aluminium <em>also</em> grows by a factor of \(1 + g/100\), then I can’t actually buy any more at the end of the year, even though the economy has ‘grown’.</p>
<p>This, in fact, is inflation: the economy hasn’t grown, it’s just been rescaled. Inflation is not what people mean by growth: they mean that you can actually buy more stuff.</p>
<p>Well, if you can buy exponentially more stuff over time there’s a problem, isn’t there? Even economists can see this, I expect.</p>
<h3 id="hand-waving">Hand-waving</h3>
<p>So if growth means being able to afford ever more material goods there’s a problem: at some point you’ll run out of stuff. This is awkward for economists who have built entire theories on the idea that growth can continue indefinitely.</p>
<p>Is there a meaning of the term ‘growth’ which <em>doesn’t</em> involve crashing into finite limits or somehow finding an endless source of new material goods? One option that economists push is that we start being able to use the existing raw materials ever more efficiently. That’s fantasy in the medium term because there are hard physical limits on efficiency. Another popular option is that we all somehow fall into a simulation and live ever more complex virtual lives, while the real world carries on without us. That’s almost certainly <em>also</em> fantasy both because, despite endless AI hype, we have no idea how to do that, and because exponential growth in computing power also has hard physical limits<sup><a href="#2021-06-18-the-idiocy-of-mars-footnote-2-definition" name="2021-06-18-the-idiocy-of-mars-footnote-2-return">2</a></sup>: <a href="https://en.wikipedia.org/wiki/Moore%27s_law">Moore’s law</a> is only a transient phenonenon.</p>
<p>In these hand-waving cases there’s also a question about what happens to the prices of physical materials. For things to make any sense the prices have to inflate at least as fast as the growth rate, and in fact faster. If, for instance, the price of aluminium remained roughly constant (when corrected for currency inflation), then after some time it would be possible to simply buy all the aluminium in the world and thus hold to ransom anything which is made from it, however efficiently that is done. So that shows that the price of materials must rise at least as fast as growth. In fact it must rise faster than that: given the finite supply of aluminium ore, the real cost of aluminium should represent that, meaning that, even as people become richer, new physical goods must become more expensive and scarce over time.</p>
<p>But there’s no need to speculate: instead let’s look at what actually <em>has</em> happened.</p>
<h2 id="energy">Energy</h2>
<p>A good proxy for the processing and consumption of material goods is energy consumption: energy is consumed to do some useful physical work, so it should correlate approximately with the amount of materials being processed in some way. And energy is fungible, so it’s easy to measure. And data on energy usage is available. <a href="https://dothemath.ucsd.edu/2021/03/textbook-debut/">Tom Murphy’s excellent book</a>, <a href="https://escholarship.org/uc/energy_ambitions"><em>Energy and Human Ambitions on a Finite Planet</em></a> contains, in section 1.2, this information for the US, sourced from the <a href="https://www.eia.gov/totalenergy/data/monthly/index.php">US Energy Information Administration</a>. He uses this data to derive a rate of growth in US energy usage of about \(3\,\mathrm{\%/y}\) between about 1650 and 2000. So during this period growth in energy usage was approximately exponential and so, it’s pretty safe to say, there was exponential growth in the physical material being used during this period.</p>
<p>So at least until recently (see below) growth meant what it naïvely means: an exponentially increasing rate of material production. This obviously can’t continue on Earth.</p>
<h2 id="mars">Mars</h2>
<blockquote>
<p>But we can just go to Mars right? Once we’ve used up Earth we can up sticks, move planet, and carry on. It took us thousands of years to use up Earth’s resources, so Mars will buy us thousands more years.</p></blockquote>
<p>Or so say the innumerate space fantasists.</p>
<p>This kind of claim is so silly it’s hard to know where to start. But let’s just take it at face value. I will assume:</p>
<ul>
<li>it is possible to either move huge numbers of humans to Mars or to mine it for raw materials and bring them back to Earth cheaply, using spacecraft driven by some unexplained magic<sup><a href="#2021-06-18-the-idiocy-of-mars-footnote-3-definition" name="2021-06-18-the-idiocy-of-mars-footnote-3-return">3</a></sup>;</li>
<li>Mars has the same amount of raw materials as Earth (it doesn’t);</li>
<li>we can hit the ground running, immediately stripping Mars at the rate Earth was being stripped;</li>
<li>any other kind of problem I haven’t thought of can be solved by yet more unexplained magic.</li></ul>
<p>So let’s do the maths.</p>
<h2 id="the-maths">The maths</h2>
<p>Let’s assume that we’re using up physical resources at some rate \(r(t)\) which is increasing exponentially:</p>
<p>\[
r(t) = r_0 e^{t/\tau}
\]</p>
<p>Where \(r_0\) is the rate at some time \(t = 0\). This can be integrated to get the total resources consumed to some time:</p>
<p>\[
R(t) = R_0 e^{t/\tau}\quad\text{where $R_0 =\tau r_0$}
\]</p>
<p>Here \(R_0\) is the total consumption to \(t=0\).</p>
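<p>Explicitly, filling in the integration (from the infinite past, where \(r\) vanishes):</p>
<p>\[
R(t) = \int_{-\infty}^{t} r_0 e^{s/\tau}\,ds = \tau r_0 e^{t/\tau}
\]</p>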
<p>OK, so let’s measure \(t\) in years and assume that the annual growth percentage is \(g\). In other words:</p>
<p>\[
r(t + 1) = \left(1 + \frac{g}{100}\right)r(t)
\]</p>
<p>Well</p>
<p>\[
\begin{aligned}
r(t + 1) &= r_0e^{(t + 1)/\tau}\\
&= r_0e^{t/\tau}e^{1/\tau}\\
&= e^{1/\tau}r(t)
\end{aligned}
\]</p>
<p>so we can get \(\tau\) in terms of \(g\):</p>
<p>\[
\tau = \frac{1}{\ln\left(1 + \frac{g}{100}\right)}
\]</p>
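<p>As a sanity check on the scale: for the \(2\,\mathrm{\%/y}\) growth rate used below this gives \(\tau = 1/\ln 1.02 \approx 50.5\,\mathrm{y}\).</p>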
<p>So, now, how long does it take for the resources consumed to go up by some factor, say \(k\)?</p>
<p>\[
\begin{aligned}
k = \frac{R(t + \Delta t)}{R(t)} &= \frac{e^{(t + \Delta t)/\tau}}{e^{t/\tau}}\\
&= e^{\Delta t/\tau}
\end{aligned}
\]</p>
<p>or</p>
<p>\[
\begin{aligned}
\Delta t &= \tau \ln k\\
&= \frac{\ln k}{\ln\left(1 + \frac{g}{100}\right)}
&&\text{using $\tau$ from above}
\end{aligned}
\]</p>
<h2 id="how-long-does-mars-get-us">How long does Mars get us?</h2>
<p>Let’s assume, as above, that at \(t=0\) we run out of resources on Earth and start mining Mars, and that we start doing it at the same rate that we were stripping Earth, and that Mars has the same amount of material as Earth, and that growth continues as before at a rate I will assume to be \(2\,\mathrm{\%/y}\) (so lower than the measured rate above). When do we run out of resources on Mars? Well, we run out of resources when \(R(t+\Delta t)/R(t) = k = 2\), so when</p>
<p>\[
\begin{aligned}
\Delta t &= \frac{\ln 2}{\ln\left(1 + \frac{g}{100}\right)}\\
&\approx 35\,\mathrm{y}
\end{aligned}
\]</p>
<p>Under entirely unrealistically optimistic assumptions, <em>stripping Mars will maintain growth at \(2\,\mathrm{\%/y}\) for 35 years</em>.</p>
<h2 id="what-about-venus-jupiter">What about Venus? Jupiter?</h2>
<p>If we make the same assumptions about Venus and start on that after Mars it gets us a further 20 years and six months. If instead we went to Jupiter, and assuming its resources scale like the ratio of its mass to Earth’s, we’d buy about 291 years, which is better, but we’re not going to be able to do that.</p>
<p>So, growth of physical resource usage can not be maintained at \(2\,\mathrm{\%/y}\) for <em>any</em> significant amount of time in the future.</p>
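<p>If you want to check these numbers yourself, here is a small function in Racket (the language used elsewhere in these fragments). The mass ratios for Jupiter and the Sun relative to Earth — roughly \(318\) and \(333{,}000\) — are my own rounded figures:</p>
<pre><code>;; Years until cumulative consumption grows by a factor of k,
;; with consumption growing at g per cent per year:
;; (ln k)/(ln (1 + g/100)), as derived above.
(define (years-to-exhaust k g)
  (/ (log k) (log (+ 1 (/ g 100)))))

(years-to-exhaust 2 2)               ; Mars: ~35 years
(years-to-exhaust 3/2 2)             ; then Venus: ~20.5 more years
(years-to-exhaust (+ 1 317.8) 2)     ; Jupiter alone: ~291 years
(years-to-exhaust (+ 1 333000) 2)    ; the Sun alone: ~642 years</code></pre>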
<hr />
<h2 id="perhaps-good-news">Perhaps good news</h2>
<p>The good news here is that data since 2000 does make it look as if the growth in energy usage is slowing down. That means either that we’re moving into one of the economists’ handwavy fantasy scenarios (hint: we’re not), or that we’re in the early stages of falling off the exponential phase of growth. Assuming the latter is true, we’re moving into a world where the models economists have built simply no longer work, and where we can’t endlessly assume we will get richer for ever. It’s perhaps not coincidental that the years since 2010 have seen the rise of a number of extremely unsavoury political movements: there will be more of these as there is more competition for increasingly scarce resources and as climate change takes effect, further increasing scarcity and driving migrations on vast scales. The likely outcome, I think, is not a smooth transition to a zero or negative growth world, but something pretty unpleasant: resource wars between major players, extreme racist responses to the migration problem, authoritarianism and fascism.</p>
<p>Some of these processes seem to be well on the way as I write.</p>
<hr />
<h2 id="some-pictures">Some pictures</h2>
<p>Here are plots which show the time to exhaust resources on:</p>
<ul>
<li>Mars (about \(35\,\mathrm{y}\)) & then Venus (another \(20.5\,\mathrm{y}\), exhausting both in about \(55\,\mathrm{y}\));</li>
<li>Jupiter alone (about \(291\,\mathrm{y}\));</li>
<li>the Sun alone (about \(642\,\mathrm{y}\)).</li></ul>
<p>These assume that available resources scale like mass, and that growth continues at \(2\,\mathrm{\%/y}\). Note that time is the y-axis in these plots: the x-axis is the resource ratio (assumed to be the mass ratio) compared to Earth’s.</p>
<div class="figure"><img src="/fragments/img/2021/mars-idiocy/mars-venus.svg" alt="Time to exhaust resources on Mars & Venus, growth at 2%/y" />
<p class="caption">Time to exhaust resources on Mars & Venus, growth at 2%/y</p></div>
<div class="figure"><img src="/fragments/img/2021/mars-idiocy/jupiter.svg" alt="Time to exhaust resources on Jupiter, growth at 2%/y" />
<p class="caption">Time to exhaust resources on Jupiter, growth at 2%/y</p></div>
<div class="figure"><img src="/fragments/img/2021/mars-idiocy/sun.svg" alt="Time to exhaust resources from the Sun, growth at 2%/y" />
<p class="caption">Time to exhaust resources from the Sun, growth at 2%/y</p></div>
<hr />
<div class="footnotes">
<ol>
<li id="2021-06-18-the-idiocy-of-mars-footnote-1-definition" class="footnote-definition">
<p>Or Venus, but usually Mars. Sometimes asteroids. <a href="#2021-06-18-the-idiocy-of-mars-footnote-1-return">↩</a></p></li>
<li id="2021-06-18-the-idiocy-of-mars-footnote-2-definition" class="footnote-definition">
<p>It is possible to imagine a world where the simulation we are assumed to end up in runs exponentially slowly, giving the consciousnesses in it the idea that growth still continues when in fact it doesn’t. <a href="#2021-06-18-the-idiocy-of-mars-footnote-2-return">↩</a></p></li>
<li id="2021-06-18-the-idiocy-of-mars-footnote-3-definition" class="footnote-definition">
<p>Here’s an idea: if you have unexplained magic to drive your vast fleets of spacecraft, <em>you’ve already solved the problem on Earth</em>! <a href="#2021-06-18-the-idiocy-of-mars-footnote-3-return">↩</a></p></li></ol></div>Field camerasurn:https-www-tfeb-org:-fragments-2021-05-11-field-cameras2021-05-11T18:39:47Z2021-05-11T18:39:47ZTim Bradshaw
<p>A comment by my friend, whose <em>nom de guerre</em> is Zyni Moë, reproduced with her permission. Note that Zyni’s first language is not English.</p>
<!-- more-->
<p>Most people are confused about field cameras. They think are best at driving to some scenery pretending to be Ansel Adams except not as good (not actually sure how good he was now, certainly can’t look at his pictures any more). Perhaps in 1990 this was true: today if you actually wanted to copy Adams you would use some digital camera, perhaps Sigma Quattro with Foveon in fancy-high-res mode, still a lot faster than a field camera, image quality better and even with that camera you can take 30 or 100 pictures in the time you can take one with the wooden box.</p>
<p>Completely wrong use for such a camera in 2020. What is the right use? That is easy: street camera. If you want to take street portraits in 21st century no camera is better than a field camera.</p>
<p>You walk around with some official anointed ‘street camera’ (small, expensive, recognisable) then people notice you because it is not any more 1950 and people are aware of cameras now. And they know you are trying to steal their photograph and, mostly, they don’t like that. If it is the most anointed kind of ‘street camera’ they will notice it even more (anyone who thinks these cameras are discreet in any way has not carried one much) and they know that you are not only trying to steal their photographs, you are almost certainly richer than them. People like even less than the stealing of photographs the stealing of photographs by rich men (always it is men).</p>
<p>Instead you can walk around with a wooden box on a tripod and a bag of rattling bits. No-one, ever, refuses to have their picture taken because it is so interesting and strange. Better, offer them a print in return for their picture: now they give you something and you give them something in return. Yes you do not get the same pictures you would with your pretend-discreet camera: you will not get pictures any one of ten thousand thousand people would take, mostly better than you. You will instead get more interesting pictures, pictures only a few hundred people could take better than you and not many even will try.</p>
<p>Of course you have to walk carrying this huge thing over your shoulder and if you are not so rich and can’t afford a fancy carbon tripod it will be heavy. But humans are good at walking if they will only try.</p>
<p>Well I have not done this but my friend has: is how I met him in fact. I have the print which I value above most things, and not just because he made it.</p>
<hr />
<p>This was originally a comment to <a href="https://theonlinephotographer.typepad.com/the_online_photographer/2020/10/how-to-choose-a-4x5.html">this</a>.</p>Carbon offsetsurn:https-www-tfeb-org:-fragments-2021-04-15-carbon-offsets2021-04-15T11:18:31Z2021-04-15T11:18:31ZTim Bradshaw
<p>People attacking carbon offsets or net zero emissions are attacking the wrong target and harming their cause. The problem is that the things we call ‘carbon offsets’ are not carbon offsets and ‘net zero’ is not net zero: they are lies. <em>That’s</em> what they should attack.</p>
<!-- more-->
<p>There’s nothing wrong with carbon offsets or net zero. In fact something like them really has to happen if anyone ever wants to do anything which is not in itself carbon-neutral. If you, say, want to buy a bike made of metal then the production of that bike more-or-less certainly released some carbon into the atmosphere, because smelting metal ore does that, even when you source the energy for the smelting from non-fossil sources, since you use coke to pull oxygen out of the ore, releasing \(\mathrm{CO_2}\) into the atmosphere. Perhaps it’s possible to make metal production carbon-neutral (probably it isn’t) but it’s not possible to make <em>everything</em> carbon-neutral. If you eat meat or dairy products then that’s <em>definitely</em> not carbon-neutral. <em>Breathing</em> is not carbon-neutral.</p>
<p>Perhaps it would be possible to make everything carbon-neutral in detail: steelmakers could also become planters of trees, every cattle farmer would grow enough arable to offset their carbon & methane emissions, everyone who breathed would spend some time planting things (so, no cities in this world) and so on. That’s … not a world many people want to live in: it’s pretty close to a mediaeval world where almost everyone has to spend a certain amount of time working on the land.</p>
<p>Instead you do it by some kind of carbon offset. The metal-refiner arranges for someone else to remove as much (or more, since the world really needs to be carbon-<em>negative</em> for quite a long time) carbon from the atmosphere as they emit into it. And one way of doing that is to just have a price (it’s not a fine, it’s actually a price!) for carbon (you could also do it by administrative fiat: it doesn’t matter in terms of the carbon but I can’t see the US doing that). Then every process does not need to be in-detail carbon-neutral (or carbon-negative for a long time to come) and you can ship the carbon-neutrality around so that the whole <em>system</em> is carbon-neutral (negative). And you can have cities again, which is great. Net zero is not a bad term and neither is carbon offset: it’s the only way you get to do it unless you want to live in the 14th century.</p>
<p>That’s not the problem with ‘carbon offsets’ (note the quotes) as they currently are. There are two problems: firstly that carbon emission is substantially underpriced, and fixing that requires worldwide cooperation, which isn’t going to happen; and secondly that they’re just a con: ‘carbon offsets’ are not, in fact, carbon offsets.</p>
<p>It’s very like what happened in the years leading to 2008: there was a shitload of risk in crappy loans which was going to cause some horrible problem, but everyone liked the crappy loans because they got to charge high interest rates on them and get rich. So a bunch of people with physics degrees (I have a physics degree) waved their hands and did some magical mixing of this risk with lower risk which made it seem to go away, so there could be more crappy loans and more risk which in turn could be magicked away. Except they weren’t very good physicists because they didn’t understand that there are conservation laws here and you don’t get to cheat those laws any more than you get to cheat conservation of momentum: they’re just as basic. I don’t completely understand the conservation laws because they have to do with the correlation of risks — you can mix a number of high risks and end up with a lower one if the underlying risks are uncorrelated — and I can’t hack the statistics: but they’re there. Well, in fact, at least some of the people who did this almost certainly knew exactly what was going on, but money had destroyed their morals. And the consequence of this was financial collapse in 2008 and brexit, Trump and creeping fascism a few years later.</p>
<p>‘Carbon offsets’ do a similar trick: you’re buying, way too cheaply, some token which says that the carbon you source into the atmosphere will be sunk from it by someone else, but either that never actually happens, or it only happens at some unspecified future date, which is the same thing. A conservation law — this time an easy one which is conservation (or reduction) of \(\mathrm{CO_2}\) in the atmosphere as a consequence of your actions — is being violated, again. And this violation is done by a trick similar to the one used in 2008: everything is obfuscated by some complex process of mixing the ‘offsets’ together such that it’s not apparent that there’s nothing (or some unspecified future thing which is the same as nothing) at the far end of the network of obfuscation. It’s an elaborate shell game by which ‘carbon offsets’ are turned into money by the players and no carbon is actually removed from the atmosphere.</p>
<p>So, I wish that people would attack the right thing: saying ‘net zero is bad’ is <em>wrong</em> because net zero (net negative) is <em>all we can ever realistically do and a completely satisfactory solution to the problem</em>. Saying ‘carbon offsets are bad’ is wrong because carbon offsets are a perfectly reasonable approach to achieving net zero. The problem is not net zero or carbon offsets, it’s that <em>the things we call ‘carbon offsets’ are not actually carbon offsets, and what we call ‘net zero’ is not actually net zero</em>.</p>
<p>And as long as people attack the wrong problem they won’t solve the real one. I mean, we’re obviously not going to solve the real one anyway, but we might as well at least <em>try</em>. It’s the same idiocy as ‘greens’ attacking nuclear power because spooky frightening or people attacking vaccines because tiny risk of clotting from vaccine is somehow more frightening than really fucking big risk of dying, horribly, of CV19.</p>Useful idiotsurn:https-www-tfeb-org:-fragments-2021-04-09-useful-idiots2021-04-09T13:24:38Z2021-04-09T13:24:38ZTim Bradshaw
<p>The authors of the Signal messaging system are acting as useful idiots for state security and police services: while they are almost certainly not working for them or funded by them, what they are doing is extremely convenient for them.</p>
<!-- more-->
<p>There is a <a href="https://yasha.substack.com/p/signal-is-a-government-op-85e">conspiracy theory</a> that <a href="https://signal.org/">Signal</a> is in fact created by some state security service: this is pretty obviously silly. Instead, I think that the people who create and endorse Signal are acting as <em>useful idiots</em> for various state security and police services.</p>
<blockquote>
<p><strong>useful idiot</strong>, noun
<br />a naive or credulous person who can be manipulated or exploited to advance a cause or political agenda</p></blockquote>
<h2 id="the-art-of-the-possible">The art of the possible</h2>
<p>The people who work for state security and police services, unlike their political masters, understand cryptography. And in particular they understand that the mathematics of cryptography makes it effectively impossible to stop people from using cryptographic communication systems which can not usefully be broken. The only ways this could be prevented would be either to forbid people access to general-purpose computers, which is not practical, or to ensure that all such computers are compromised at a low level which is also not practical<sup><a href="#2021-04-09-useful-idiots-footnote-1-definition" name="2021-04-09-useful-idiots-footnote-1-return">1</a></sup>.</p>
<p>In other words they understand that people will be able to communicate with each other in such a way that this communication can not be overheard in bulk, and that there is nothing they can do about that.</p>
<p>What they <em>can</em> do is to compromise <em>individual</em> communication links: once they’ve worked out that, for instance, two people who are of great interest to them are talking to each other they can work to compromise the systems that these people are using to communicate — installing things like key-loggers, rootkits or both, which will sniff the communications before they are encrypted. Doing this is a lot of work and probably requires a significant amount of traditional tradecraft: by far the easiest way to do it will be by gaining physical access to the devices they want to compromise and doing so without arousing suspicion, for instance.</p>
<p>Their difficulty, then, is filtering the people that they want to overhear sufficiently badly from the huge mass of people that they don’t care about. This is where Signal comes in.</p>
<h2 id="useful-idiots">Useful idiots</h2>
<p>Signal is a tool which allows encrypted communication between individuals and groups. There is no reason to believe that this communication can be broken.</p>
<p>But Signal has been <a href="https://www.tfeb.org/fragments/2021/01/16/what-s-wrong-with-signal-s-contact-discovery/" title="What's wrong with Signal's contact discovery">designed in such a way that it is inherently unsafe</a>: it uses phone numbers for identifiers and its contact discovery works in such a way that anyone who knows <em>your</em> phone number can know if you are a Signal user, whether or not you know <em>their</em> phone number. This approach means that if you have Signal installed then you will get a notification whenever anyone who is in your phonebook installs Signal, <em>whether or not you are in their phonebook</em>. This was done intentionally, and presumably as an attempt to drive growth in users with the eventual aim of making money from the large userbase.</p>
<p>This makes Signal a seriously bad choice for, for instance, people who are suffering abuse or being stalked. The moment you install Signal in order to talk to someone who might help you, the person you are being abused by or who is stalking you can know this, and you won’t know that they know.</p>
<p>On the other hand this is very convenient for state security and police services. They don’t care about the cryptographic security because they know that people can use tools which they can’t attack. But finding someone’s phone number (all someone’s phone numbers) is a pretty easy thing to do if you’re a state security or police service, and Signal’s contact discovery then means that they can silently trawl through people they might be interested in and work out who has Signal installed.</p>
<p>What this means is that, assuming Signal tends to be used by people who really do have something to hide<sup><a href="#2021-04-09-useful-idiots-footnote-2-definition" name="2021-04-09-useful-idiots-footnote-2-return">2</a></sup> it works as a filter which allows state security and police services to identify people who are likely to be of interest to them from larger lists of people.</p>
<h2 id="the-coronation-of-the-idiots">The coronation of the idiots</h2>
<p>Until recently it has been rather unclear how Signal’s authors intend to use the product to attempt to make themselves very rich. Well, they’ve just answered that question: they are going to <a href="https://signal.org/blog/help-us-test-payments-in-signal/" title="glue a cryptocurrency into it">glue a cryptocurrency into it</a>, so it will be possible to make anonymous payments to and from Signal. Conveniently <a href="https://www.stephendiehl.com/blog/signal.html" title="Moxie Marlinspike / MobileCoin">Signal’s authors have an ownership stake in the cryptocurrency involved</a>: something which should not be very surprising<sup><a href="#2021-04-09-useful-idiots-footnote-3-definition" name="2021-04-09-useful-idiots-footnote-3-return">3</a></sup>.</p>
<p>So Signal’s authors have now revealed their proposed solution to their underpants gnome problem: they intend to make money from Signal by making money from the transactions people make using it. <a href="https://www.schneier.com/blog/archives/2021/04/wtf-signal-adds-cryptocurrency-support.html" title="Bruce Schneier, another useful idiot">Lots of people</a> have been saying that this is a bad idea: why entangle a messaging system with a payment system? Well, they’re just not thinking very hard about this because the answer is terribly simple: they are being entangled so Signal’s authors can make money.</p>
<p>So, what kind of person would be particularly interested in a tool which allows encrypted communication (with disappearing messages, even), and allows anonymous, secure payments? People who deal in illegal goods would be. If you’re dealing in illegal drugs, or illegal pornography, or anything similar, Signal will soon look like a tool designed especially for you.</p>
<p>But, really, it turns out to have been designed for someone else. If you are a state security or police service, soon you will be able to look at a list of people who you suspect may be dealing in illegal goods, use Signal’s contact discovery to find the people who have it installed, and now you have a shorter list of people who are much more likely to be of interest to you.</p>
<p>Signal is the tool that state security or police services would have built, but they didn’t have to do so: some useful idiots built it for them.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-04-09-useful-idiots-footnote-1-definition" class="footnote-definition">
<p>It is, inevitably, the subject of other conspiracy theories. <a href="#2021-04-09-useful-idiots-footnote-1-return">↩</a></p></li>
<li id="2021-04-09-useful-idiots-footnote-2-definition" class="footnote-definition">
<p>Rather than the sort of people who wear ‘tactical’ watches so they can pretend they are in the special forces. <a href="#2021-04-09-useful-idiots-footnote-2-return">↩</a></p></li>
<li id="2021-04-09-useful-idiots-footnote-3-definition" class="footnote-definition">
<p>It does at least appear that <a href="https://www.mobilecoin.com/" title="MobileCoin">MobileCoin</a>, the cryptocurrency Signal will use, does not use Bitcoin’s ‘proof of work’ approach which is currently causing significant carbon emissions. <a href="#2021-04-09-useful-idiots-footnote-3-return">↩</a></p></li></ol></div>How the backtrace was conqueredurn:https-www-tfeb-org:-fragments-2021-03-26-how-the-backtrace-was-conquered2021-03-26T11:37:22Z2021-03-26T11:37:22ZTim Bradshaw
<p><strong>Once upon a time</strong>, when the world was younger, a young and rather foolish physics student used to debug his FORTRAN programs using printed backtraces.</p>
<!-- more-->
<p>And I do mean printed backtraces: when the machine crashed the chain printer attached to it would vomit out many sheets of paper which had procedure names and line numbers on them. And, after restarting the machine so the next user could make it crash in their turn, he would take this printout and the printout of his program and compare the line numbers: looking at the code, trying to work out what had gone wrong and marking corrections in pencil. He spent many hours late at night in this way.</p>
<p>Later on, this same student (now a maths student) discovered a wonderful thing: a programming language called Lisp in which you could write programs to solve complex algebra problems which were of interest in his field. And although, in theory, if you had the kind of computer which maths departments could not afford, Lisp was an <em>interactive</em> language, this was not true in practice if all you had was the kind of computer that was all a maths department could afford. So things went on much as before: he would make some changes to his program, set up the equations it was going to try to solve, and then, late at night when there were no other users to inconvenience, set it off running. In the morning there might occasionally be a solution, and even more occasionally a solution which was useful. But more often there would be only the corpse of the program in the form of an elaborate backtrace after it had been mortally wounded by some fierce bug (error handling was a thing not yet thought of, at least by the student). This time, though, the backtrace would be in a log file from the run.</p>
<p>And the student made another discovery: there was a certain text editing environment used by some far-off people who had access to much bigger and better computers, and this editing environment purported to support Lisp programming rather well: certainly better than the rudimentary editor he used then. And he managed to get a copy of this environment (legend has it it was version 17.64) on a tape from someone, and he managed to make it run, just, on the maths department’s machine. And he taught it enough about the Lisp dialect he was using that it was indeed helpful, if often annoying to other users as it took rather a lot of the capacity of the machine to support it. And everything was a little better.</p>
<p>And this text editing system came with a rather wonderful tool: a program whose name may have been ‘tags’ which would, for the languages it understood, make a file which mapped between definition names and their locations in the filesystem. And he modified this tags program to understand the dialect of Lisp he was using as well. Very wonderfully, the system would also cope with the case where the definition had moved, which it almost always had, and which made things like line and column numbers so brittle and useless (source control might have been invented by then, but the student knew nothing of that). This, of course, was one of the primitive ancestors of the automatic systems which will find definitions of symbols that any reputable editor, and even some that are perhaps a little less than reputable, now has.</p>
<p>And now, when he came in in the morning to find a new backtrace from the previous night’s run, he would edit this backtrace in the editing system and find interesting lines in it, at which he would type the very wonderful ‘meta-dot’ or, as he knew it (not being blessed with a keyboard with a meta key), ‘escape dot’ command. And the disk light would come on for a little, and then he would be looking at the definition he was interested in.</p>
<p>Thus was the backtrace conquered. And from that day to this it has never dared raise its head again in polite company, but instead lurks, unheeded except by the few who now remember it, in the darker corners of the system. As for the student, well, no-one now remembers him at all.</p>Richard Stallmanurn:https-www-tfeb-org:-fragments-2021-03-24-richard-stallman2021-03-24T11:24:44Z2021-03-24T11:24:44ZTim Bradshaw
<p>Richard Stallman (RMS) is a famous hacker who wrote Emacs and founded the Free Software Foundation and the GNU project. He is an important figure in the history of free software. He is also someone whose behaviour towards women has been appalling and who believed, for a long time, that sex with children was not harmful: he is someone who should have no place in the present or future of free software, at all. And yet he is vociferously defended by a significant number of free software advocates: this says exactly what you think about them.</p>
<!-- more-->
<p>[What follows is wrong in some important ways: please see <a href="https://www.tfeb.org/fragments/2021/08/17/neurodivergent/">this article</a> which has both corrections and an apology.]</p>
<p>There are many well-attested examples of RMS’s grotesque attitudes to women<sup><a href="#2021-03-24-richard-stallman-footnote-1-definition" name="2021-03-24-richard-stallman-footnote-1-return">1</a></sup>. Here is an example <a href="https://stallman.org/archives/2006-mar-jun.html#05%20June%202006%20%28Dutch%20paedophiles%20form%20political%20party%29" title="Richard Stallman's personal political notes from 2006: March - June">from his own blog in June 2006 (updated April 2018)</a>, of his attitude to something else:</p>
<blockquote>
<p>I am skeptical of the claim that voluntarily [sic] pedophilia harms children.</p></blockquote>
<p>Yes, you are reading that correctly: RMS thought, in 2006 (he was 53), that <em>adults having sex with children</em> was OK, so long as it was, you know, ‘voluntary’: so long as the children consented. Because children, in his view at the time, <em>could consent to sex</em>.</p>
<p>In other words, in 2006 (and for many years following that) RMS was someone who did not understand, even slightly, what it means to be able to consent to sex (or who understood but did not care). How do you think he treated women?</p>
<p>Thirteen years later, on <a href="https://stallman.org/archives/2019-jul-oct.html#14_September_2019_(Sex_between_an_adult_and_a_child_is_wrong)" title="Richard Stallman's personal political notes from 2019: July - October">14th September 2019</a> and at the age of 66, he retracted this:</p>
<blockquote>
<p> Many years ago I posted that I could not see anything wrong about sex between an adult and a child, if the child accepted it.</p>
<p>Through personal conversations in recent years, I’ve learned to understand how sex with a child can harm per [sic<sup><a href="#2021-03-24-richard-stallman-footnote-2-definition" name="2021-03-24-richard-stallman-footnote-2-return">2</a></sup>] psychologically. This changed my mind about the matter: I think adults should not do that. I am grateful for the conversations that enabled me to understand why.</p></blockquote>
<p>If we assume that he is writing in good faith (and I have no reason to believe otherwise), then I think there is only one conclusion to draw from this: something is badly wrong with his mind which makes it extremely hard for him to understand notions such as consent, and probably other things as well. Perhaps he is someone who deserves sympathy, not contempt. But, like other people who suffer from such problems, he needs to be kept out of situations where he can do harm.</p>
<p>Unfortunately he is not being kept out of such situations: rather he is being supported and enabled by a group of acolytes, for their own reasons which are certainly not good ones.</p>
<p>I’ve known since the early 1990s that cooperation with RMS was impossible, because I was a bystander on the right mailing lists and I saw the mail exchanges<sup><a href="#2021-03-24-richard-stallman-footnote-3-definition" name="2021-03-24-richard-stallman-footnote-3-return">3</a></sup>. I had no idea, at all, about this stuff (which is unforgivable: I should have known, even though I left the cult in about 1994). As I said above, almost certainly he is ill rather than evil: there is simply something which does not work properly in his mind which makes him unable to understand these things. If this is true then it is very sad for him. However like other similar people he is still a danger to those around him: a man who finds it hard to understand that sex with children is wrong should be nowhere near any kind of leadership position, in anything, and should never have been so.</p>
<p>But, of course, instead of that, his acolytes and fanboys have built an elaborate halfwit cargo cult around him for more than 30 years. Many of them are now so blinded by the cult that they have made that they simply can no longer see, if they could ever see, that the little tinpot god they have built it around is damaged, if not evil. And so they will drink the kool-aid and end up, with the other cult members, still praising the idiot toy god they made even as the building burns around them<sup><a href="#2021-03-24-richard-stallman-footnote-4-definition" name="2021-03-24-richard-stallman-footnote-4-return">4</a></sup>.</p>
<p>And I have some limited sympathy for them, as I have some provisional sympathy for him, as I have sympathy for other damaged people. But the cult built around him does great harm and RMS directly does great harm, and it needs to stop. It needed to stop 30 years ago but it needs to stop even more now.</p>
<p>On the other hand I don’t have sympathy for the many others: the people whose minds have not been damaged by the cult, the people who think that RMS’s attitudes are just fine nonetheless. The people who knowingly cheer on this damaged human being because he represents <em>their</em> views: their views towards women and perhaps their views towards children too. The people who think that the women and other people who are offended by the grotesque attitudes of RMS, and of many others in the free software community, are <a href="https://forums.theregister.com/forum/all/2021/03/23/fsf_stallman_outcry/#c_4226763" title="'radfems'">‘radfems’</a>. If you are one of those people then fuck you: fuck all of you.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-03-24-richard-stallman-footnote-1-definition" class="footnote-definition">
<p>Since writing the first version of this post, I’ve exchanged email with a friend of mine who describes having been ‘accosted’ by him at an event some time ago. Her native language is not English and she pretty clearly did not want to discuss it further, but I don’t believe she meant she was physically assaulted but rather that she was approached inappropriately. <a href="#2021-03-24-richard-stallman-footnote-1-return">↩</a></p></li>
<li id="2021-03-24-richard-stallman-footnote-2-definition" class="footnote-definition">
<p>‘Per’ seems to be a third-person-neutral pronoun used by people who don’t understand that ‘they’ has been used that way for hundreds of years in English. <a href="#2021-03-24-richard-stallman-footnote-2-return">↩</a></p></li>
<li id="2021-03-24-richard-stallman-footnote-3-definition" class="footnote-definition">
<p>And I’m a Lisp hacker: I have low standards for cooperation. (What’s the difference between a Lisp hacker and a terrorist? You can negotiate with a terrorist.) <a href="#2021-03-24-richard-stallman-footnote-3-return">↩</a></p></li>
<li id="2021-03-24-richard-stallman-footnote-4-definition" class="footnote-definition">
<p>Yes, I know I’m mixing my cults here. <a href="#2021-03-24-richard-stallman-footnote-4-return">↩</a></p></li></ol></div>What's wrong with Signal's contact discoveryurn:https-www-tfeb-org:-fragments-2021-01-16-what-s-wrong-with-signal-s-contact-discovery2021-01-16T11:35:36Z2021-01-16T11:35:36ZTim Bradshaw
<p>After WhatsApp’s threatened change to their terms of service, which may allow them to leak information to Facebook, many people are moving to Signal, a tool which purports to be more secure. If you want security which is not at least partly theatrical you should not use Signal.</p>
<!-- more-->
<h2 id="whatsapp">WhatsApp</h2>
<p>On or about the 6th of January 2021, <a href="https://www.bbc.co.uk/news/technology-55573149" title="terms of service">WhatsApp users were required to agree to new terms of service</a> or to stop using the service by the 8th of February. These terms of service were at best confusing, but given that WhatsApp is owned by Facebook, a company whose entire business model is selling its users’ souls to its customers and which has been heavily implicated in that other thing that happened on the 6th of January 2021, the conclusion was not likely to be good.</p>
<p>I’m glad to say this seems to have been a disaster for WhatsApp: so many users changed to <a href="https://signal.org/" title="Signal">Signal</a> — an app which sells itself as being more secure — that it <a href="https://www.bbc.co.uk/news/technology-55684595" title="Signal falls over">fell</a> <a href="https://www.theregister.com/2021/01/15/signal_app_down/" title="Signal falls over">over</a> under the load for a while on the 15th of January. People are apparently <a href="https://www.bbc.co.uk/news/technology-55634139" title="flocking to rival platforms">leaving WhatsApp in droves</a>, and moving to Signal and other platforms.</p>
<p>WhatsApp / Facebook were so alarmed by this that they’ve both issued a number of <a href="https://www.theverge.com/2021/1/12/22226792/whatsapp-privacy-policy-response-signal-telegram-controversy-clarification" title="clarifications">clarifications</a>, <a href="https://www.bbc.co.uk/news/technology-55683745" title="delayed the implementation date">delayed the implementation date until the 15th of May</a> — probably in the hope that people will have forgotten by then — and made clear that the changes <a href="https://twitter.com/markscott82/status/1346817693375229952" title="not in Europe">do not apply in Europe</a>, where there are reasonable privacy laws, and not even, yet, in the UK which has not yet completed its transition to Boris Johnson’s hereditary feudal fiefdom.</p>
<p>So that’s, perhaps, good, right? Lots of people were driven to Signal which is ever so much more secure and written and run by very nice people who understand and care about security.</p>
<h2 id="signal">Signal</h2>
<p>Well, the people who wrote Signal and run its infrastructure care about their users’ security only as far as it suits them. Yes, they make <a href="https://signal.org/#signal" title="a great deal of noise">a great deal of noise</a> about how secure and safe it is: their website is covered in quotes from people like Edward Snowden and Bruce Schneier and generally makes a very big deal about the security of the platform. If you don’t read what they write quite carefully you could be forgiven for thinking that Signal was completely safe, and completely private.</p>
<p>It’s not. And it’s not safe <em>by design</em>: the Signal people know it is not safe, <em>and they don’t care</em>.</p>
<h2 id="signals-contact-discovery">Signal’s contact discovery</h2>
<p>Here is a sketch of how contact discovery works in Signal. If you are a Signal user you have some identity on the system, and that identity is derived from your phone number. In particular, if you know the phone number you can work out the identity<sup><a href="#2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-1-definition" name="2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-1-return">1</a></sup>. If you allow Signal access to your contacts (which it will ask you for), then every once in a while it will work out something equivalent to identities corresponding to your contacts, upload them, ephemerally, to Signal’s infrastructure, and compute the intersection. Once it’s done that, you know which of your contacts have Signal.</p>
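<p>As a toy model of this (my own sketch in Racket, not Signal’s actual protocol, and the phone numbers are invented), the essential point is that an identity is a deterministic function of a phone number, so <em>anyone</em> holding a number can test it for membership:</p>
<pre><code>;; Toy model of phone-number-based contact discovery.  Not Signal's
;; real scheme: identities here are just SHA-1 hashes of the number.
(require racket/set
         file/sha1)

(define (identity-for number)
  ;; an identity anyone who knows the number can compute
  (sha1 (open-input-string number)))

;; the service's set of registered identities
(define registered
  (for/set ([n (in-list '("+15550100001" "+15550100002"))])
    (identity-for n)))

(define (discover contacts)
  ;; which of these phone numbers belong to registered users?
  (for/list ([n (in-list contacts)]
             #:when (set-member? registered (identity-for n)))
    n))

(discover '("+15550100002" "+15550199999")) ; => '("+15550100002")</code></pre>
<p>Note that nothing here requires the prober to be in your contacts, or you in theirs.</p>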
<p>There are several obvious problems with this approach. The most obvious of these is that if any of the data on your contacts leaks, even in encrypted form — if someone attacks Signal’s infrastructure, or if Signal themselves are not trustworthy, say — then it is, obviously, a bad thing. And Signal have gone to heroic lengths to protect against this. Here is their initial outline of what it does (the following text comes from the link below):</p>
<blockquote>
<p>Private contact discovery using SGX is fairly simple at a high level:</p>
<ol>
<li>Run a contact discovery service in a secure SGX enclave.</li>
<li>Clients that wish to perform contact discovery negotiate a secure connection over the network all the way through the remote OS to the enclave.</li>
<li>Clients perform remote attestation to ensure that the code which is running in the enclave is the same as the expected published open source code.</li>
<li>Clients transmit the encrypted identifiers from their address book to the enclave.</li>
<li>The enclave looks up a client’s contacts in the set of all registered users and encrypts the results back to the client.</li></ol></blockquote>
<p>There is <a href="https://signal.org/blog/private-contact-discovery/" title="Signal's private contact discovery">much more description</a> of this. And it’s all fine: it really does go to very great lengths to make it very hard for Signal themselves or any other malicious actor who might be able to compromise their systems to gain access to your contacts, and still less to your messages. And that’s all very wonderful.</p>
<p>Now you’re probably expecting me to spout some conspiracy theory about how the SGX enclaves themselves have been compromised at the hardware level by some state-level entity, possibly with a three-letter name, so everything is worthless. Well, there have been rumours that that sort of thing has happened, certainly. But, well, they probably haven’t happened: the conspiracy theories probably are just conspiracy theories as they usually are. Even if they have happened, defending against state-level entities, with or without three-letter names, is generally futile: if these people are interested enough in what’s on your phone they probably will find out, either by fancy technology or by more traditional techniques, possibly involving a rubber hose.</p>
<p>No, that’s not the problem. The problem is laughably simpler than that.</p>
<h2 id="alice-and-elizabeth">Alice and Elizabeth</h2>
<p>Let’s imagine two people: Alice and Elizabeth, her partner. Alice is physically violent towards Elizabeth who lives in serious fear of her, is regularly being beaten by her and is terrified that worse things will happen soon. Elizabeth desperately wants and needs to escape from the relationship before something really bad happens, but she doesn’t know how: she needs to talk to someone privately. Alice, needless to say, doesn’t want this to happen.</p>
<p>Elizabeth realises that she can install Signal on her phone and then use it to communicate, privately, with people who might be able to help her — the police, perhaps. She does so.</p>
<p>Unbeknownst to her Alice already has Signal, perhaps on a phone the number of which Elizabeth does not know. Signal’s contact discovery promptly tells Alice that Elizabeth has installed Signal, and since she’s running it on a phone which doesn’t appear in Elizabeth’s contacts, Elizabeth doesn’t know this. And this story ends with Alice beating Elizabeth to death.</p>
<h2 id="vladimir-and-the-dissidents">Vladimir and the dissidents</h2>
<p>Or let’s imagine Vladimir. Vladimir runs a country which was once, briefly, a democracy but now, once more and inevitably, is a kleptocracy and a police state. Many, many people in Vladimir’s country don’t like him: his problem is knowing which ones to have dealt with. Well this is easy. Vladimir extracts from the telephone companies the phone numbers of the people he’s interested in — either with bribes or with pliers, it does not matter which. He then buys a burner phone, puts all these numbers in its contact list, and installs Signal. Now he knows which of his enemies have Signal, but since his burner phone is most certainly not in their contact lists they have no idea that he knows they have it and thus cannot run. Doors are knocked on at 3 in the morning, people vanish, their assets are acquired by Vladimir who uses them to build another vast, tasteless palace.</p>
<h2 id="unsafe-at-any-speed">Unsafe at any speed</h2>
<p>What Signal have done is to produce a beautifully secure implementation of a contact discovery algorithm which is <em>designed to be unsafe</em>, because it allows anyone who knows your phone number to know whether you have Signal, and if you don’t know <em>their</em> phone number — if they are, for instance, stalking you — it will not, and <em>can not</em>, tell you that they know this. The contact discovery algorithm is <em>designed</em> to leak information.</p>
<p>And they know this, and they don’t care. I’ll repeat that: they know that their product enables stalking, and they do not care about that. I don’t know why they made these choices, but I don’t expect the reasons are very good ones.</p>
<h2 id="some-ideas-which-are-mostly-useless">Some ideas which are mostly useless</h2>
<p>It’s tempting to say that, well, the contact discovery algorithm should be <em>mutual</em>: it should only tell me that you have Signal if both you are in my contacts list and I am in yours. That can’t work, because the only way to do this would be to allow my contact list (in encrypted form) to persist, indefinitely, on Signal’s infrastructure, which would leave it open to attack.</p>
<p>Another approach would be to have a bit you could set on your identity which says ‘this identity should not partake in contact discovery’: if it was set then Signal would not allow either it to be discoverable or it to discover others, with the second restriction existing to prevent people deliberately setting it so they could stalk other people while not themselves being discoverable. This is closer to working: it protects against users of the service, but it does not protect against people who can acquire its data: they can simply strip the privacy bits from the identities they’ve captured and run contact discovery on their own copy of the infrastructure.</p>
<p>Strangely, something which should make Signal’s stalking problem less serious is Facebook’s catastrophic misjudgement over WhatsApp’s privacy policy: large numbers of users have migrated from WhatsApp to Signal or, at least, have <em>installed</em> Signal and thus now have identifiers in the system. Stalking someone by discovering they have Signal installed now tells you a lot less about them than it did previously. Of course Elizabeth has Signal, and Vladimir may discover that both his real and potential enemies also have it<sup><a href="#2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-2-definition" name="2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-2-return">2</a></sup>. This makes things, at least, less bad, although it does not make them good.</p>
<h2 id="one-idea-which-is-not-useless">One idea which is not useless</h2>
<p>The underlying problem is that Signal uses phone numbers as identifiers, where phone numbers are essentially public information. This enables stalking and worse.</p>
<p>Well, instead, the system could use completely randomly created identifiers which were not tied in any way to phone numbers. This would make the users of the system completely anonymous: the only way you could discover someone’s identifier is if they gave it to you. For added value it might be made, optionally and not by default, possible to attach things like phone numbers and email addresses to the random identifiers, whereupon they <em>would</em> be discoverable, by an algorithm essentially identical to Signal’s. Using such a system you could choose either to be completely undiscoverable or, and only if you wanted to be, to be more-or-less discoverable.</p>
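<p>Making such an identifier is trivial: here is a minimal sketch (my code, not Threema’s actual scheme):</p>
<pre><code>;; A random 128-bit identifier, derived from nothing personal at all.
(require racket/random
         racket/format)

(define (make-identifier)
  (apply string-append
         (for/list ([b (in-bytes (crypto-random-bytes 16))])
           (~r b #:base 16 #:min-width 2 #:pad-string "0"))))

(make-identifier) ; e.g. "3f09ab…": discoverable only if you share it</code></pre>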
<p>That would be easy, wouldn’t it? The Signal people, who are clearly ever so smart, must have thought of that, and decided not to do it: I wonder why?</p>
<p>Well, of course, other people — people who <em>actually</em> care about the safety of these sorts of systems — have not only thought about doing it this way, they <em>have</em> done it this way. <a href="https://threema.ch/" title="Threema">Threema</a> is one such app<sup><a href="#2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-3-definition" name="2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-3-return">3</a></sup>.</p>
<h2 id="the-theatre-of-the-absurd">The theatre of the absurd</h2>
<p>Signal’s authors make a lot of noise about how secure it is. But they know it is, by design, not safe. If you care about safety you should use tools which really are safe rather than tools whose authors treat safety as a matter of theatre.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-1-definition" class="footnote-definition">
<p>Whether you can go the other way is not clear: ideally the answer would be ‘no’ but the space of phone numbers is so small that it’s not completely implausible to simply search by brute-force to find out which identities correspond to which numbers if you have the computational resources to do so. However this does not matter here. <a href="#2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-1-return">↩</a></p></li>
<li id="2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-2-definition" class="footnote-definition">
<p>Vladimir is not the sort of person who has friends. <a href="#2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-2-return">↩</a></p></li>
<li id="2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-3-definition" class="footnote-definition">
<p>This article is not an advertisement for Threema: it just happens to be a system I know of which does this. I do not personally use it although it does appear to be very competently designed and implemented by people who really do care about safety rather than merely pretending to do so. I am sure there are other similar systems. <a href="#2021-01-16-what-s-wrong-with-signal-s-contact-discovery-footnote-3-return">↩</a></p></li></ol></div>Generic interfaces in Racketurn:https-www-tfeb-org:-fragments-2021-01-08-generic-interfaces-in-racket2021-01-08T18:25:59Z2021-01-08T18:25:59ZTim Bradshaw
<p>Or: things you do to distract yourself from watching an attempted fascist coup.</p>
<!-- more-->
<p>A thing that exists in many languages with a notion of a sequence of objects is a function variously known as <code>fold</code> or <code>reduce</code>: this takes another function of two arguments, some initial value, and walks along the sequence successively reducing it using the function. So, for instance:</p>
<ol>
<li><code>(fold + 0 '(1 2 3))</code> turns into <code>(fold + (+ 0 1) '(2 3))</code> which turns into …</li>
<li><code>(fold + 1 '(2 3))</code> turns into <code>(fold + (+ 1 2) '(3))</code> which turns into …</li>
<li><code>(fold + 3 '(3))</code> turns into <code>(fold + (+ 3 3) '())</code> which turns into …</li>
<li><code>6</code>.</li></ol>
<p>It’s pretty easy to write a version of <code>fold</code> for lists:</p>
<pre><code>(define (fold op initial l)
(if (null? l)
initial
(fold op (op initial (first l)) (rest l))))</code></pre>
<p>Racket calls this (or a more careful version of this) <code>foldl</code>: there is also <code>foldr</code> which works from the other end of the list and is more expensive as a result.</p>
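<p>For example, the direction matters:</p>
<pre><code>> (foldl cons '() '(1 2 3))
'(3 2 1)
> (foldr cons '() '(1 2 3))
'(1 2 3)</code></pre>
<p>(Note that Racket’s <code>foldl</code> passes the element first and the accumulated value second, the other way round from the <code>fold</code> above.)</p>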
<p>Well, one thing you might want to do is have a version of <code>fold</code> which works on <em>trees</em> rather than just lists. One definition of a tree is:</p>
<ol>
<li>it’s a collection of nodes;</li>
<li>nodes have values;</li>
<li>nodes have zero or more unique children, which are nodes;</li>
<li>no node is the descendant of more than one node;</li>
<li>there is exactly one root node which is the descendant of no other nodes.</li></ol>
<p>A variant of this (which will matter below) is that the children of a node are either nodes or any other object, and there is some way of knowing if something is a node or not<sup><a href="#2021-01-08-generic-interfaces-in-racket-footnote-1-definition" name="2021-01-08-generic-interfaces-in-racket-footnote-1-return">1</a></sup>.</p>
<p>You can obviously represent trees as conses, with the value of a cons being its car, and the children being its cdr. Whatever builds the tree needs to make sure that (3), (4) and (5) are true, or you get a more general graph structure.</p>
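<p>For instance (using the variant where children may be arbitrary objects), here is such a tree: the root has value <code>1</code>, and its children are a node with value <code>2</code> (whose children are the leaves <code>4</code> and <code>5</code>) and the bare leaf <code>3</code>:</p>
<pre><code>'(1 (2 4 5) 3)</code></pre>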
<p>But you might want to have other sorts of trees, and you’d want the fold function not to care about what sort of tree it was processing: just that it was processing a tree. Indeed, it would be nice if it was possible to provide special implementations for, for instance, binary trees where rather than iterating over some sequence of children you’d know there were exactly two.</p>
<p>So, I wondered if there was a nice way of expressing this in Racket and it turns out there mostly is. Racket has a notion of <a href="https://docs.racket-lang.org/reference/struct-generics.html">generic interfaces</a> which are really intended as a way for different <a href="https://docs.racket-lang.org/reference/structures.html">structure types</a> to provide common interfaces, I think. But it turns out they can be (ab?)used to do this, as well.</p>
<p>Generic interfaces are not provided by <code>racket</code> but by <code>racket/generic</code>: everything below assumed <code>(require racket/generic)</code>.</p>
<h2 id="a-generic-treelike-interface">A generic <code>treelike</code> interface</h2>
<p>A treelike object supports two operations:</p>
<ul>
<li><code>node-value</code> returns the value of a node;</li>
<li><code>node-children</code> returns a list of the node’s children.</li></ul>
<p>The second of these is a bit nasty: it would be better perhaps to either provide an interface for mapping over a node’s children, or to return some general, possibly lazy, sequence of children. But this is just playing, so I don’t mind.</p>
<p>Here is a definition of a generic <code>treelike</code> interface, which includes default methods for lists:</p>
<pre><code>(define-generics treelike
;; treelike objects have values and children
(node-value treelike)
(node-children treelike)
#:fast-defaults
(((λ (t)
(and (cons? t) (list? t)))
;; non-null proper lists are trees: their value is their car;
;; their children are their cdr.
(define node-value car)
(define node-children cdr))))</code></pre>
<p>Notes:</p>
<ul>
<li>This uses <code>#:fast-defaults</code> instead of <code>#:defaults</code>, which means that the check for objects satisfying <code>list?</code> happens first, before the normal method dispatch, and so is fast. This is fine in this case: lists are never going to be confused with any other tree type.</li>
<li>This relies on Racket’s (and Scheme’s?) <code>list?</code> predicate returning true only for proper lists rather than CL’s cheap <code>listp</code> which just returns true for anything which is either <code>nil</code> or a cons.</li>
<li>There are lots of other options to <code>define-generics</code> which I’m not using and many of which I don’t understand.</li></ul>
<p>With this definition:</p>
<pre><code>> (treelike? '())
#f
> (treelike? '(1 2 3))
#t
> (treelike? '(1 2 . 3))
#f
> (node-children '(1 2 3))
'(2 3)</code></pre>
<p>So, OK.</p>
<h2 id="a-treelike-binary-tree">A <code>treelike</code> binary tree</h2>
<p>We could then define a <code>binary-tree</code> type which implements this generic interface:</p>
<pre><code>(struct binary-tree (value left right)
#:transparent
#:methods gen:treelike
((define (node-value bt)
(binary-tree-value bt))
(define (node-children bt)
(list (binary-tree-left bt)
(binary-tree-right bt)))))</code></pre>
<p>The <code>#:methods gen:treelike</code> tells the structure we’re defining the methods needed for this thing to be a <code>treelike</code> object.</p>
<p>And now we can check things:</p>
<pre><code>> (treelike? (binary-tree 1 2 3))
#t
> (node-value (binary-tree 1 2 3))
1
> (node-children (binary-tree 1 2 3))
'(2 3)</code></pre>
<p>OK.</p>
<h2 id="two-attempts-at-a-generic-foldable-interface">Two attempts at a generic <code>foldable</code> interface</h2>
<p>So now I want to define another interface for things which can be folded. And the first thing I tried is this:</p>
<pre><code>(define-generics foldable
  ;; broken
  (fold operation initial foldable)
  #:defaults
  ((treelike?
    (define (fold op initial treelike)
      (let ([current (op initial (node-value treelike))]
            [children (node-children treelike)])
        (cond
          [(null? children) current]
          [(null? (rest children))
           ;; the last child: don't recurse onto the empty tail,
           ;; which is not treelike
           (fold op current (first children))]
          [else
           (fold op (fold op current (first children))
                 (rest children))]))))
   ((const true)
    (define (fold op initial any)
      (op initial any)))))</code></pre>
<p>So this tries to define a <code>fold</code> generic function with two implementations: one for <code>treelike</code> objects and one for <em>all other objects</em>. This means that <em>all</em> objects are foldable, and, for instance, <code>(fold + 0 1)</code> simply turns into <code>(+ 0 1)</code>. This is a bit odd, but it simplifies the implementation of the interface for <code>treelike</code> objects, since the children of nodes may not themselves be nodes (see above).</p>
<p>There is another complexity: if the list of a <code>treelike</code> node’s children isn’t null, then it is itself a <code>treelike</code>, so it can safely be recursed over rather than explicitly iterated over. The empty tail of that list is <em>not</em> treelike, though, which is why the single-remaining-child case above stops the recursion rather than recursing onto <code>'()</code>. This is a slightly questionable pun I think, but, well, I am a slightly questionable programmer.</p>
<p>And this … doesn’t work:</p>
<pre><code>> (fold + 0 '(1 2 3))
; node-value: contract violation:
; expected: treelike?
; given: 2
; argument position: 1st</code></pre>
<p>It took me a long time to understand this, and the answer is that the definitions of <code>fold</code> inside the <code>define-generics</code> form <em>aren’t adding methods to a generic function</em>: what they are doing is defining a little local function, <code>fold</code>, which <em>then</em> gets glued into the generic function. So references to <code>fold</code> in the definition refer to the little local function. It is exactly as if you had done this, in fact:</p>
<pre><code>(define-generics foldable
  ;; this is why it's broken
  (fold operation initial foldable)
  #:defaults
  ((treelike?
    (define fold
      (letrec ([fold (λ (op initial treelike)
                       (let ([current (op initial (node-value treelike))]
                             [children (node-children treelike)])
                         (cond
                           [(null? children) current]
                           [(null? (rest children))
                            (fold op current (first children))]
                           [else
                            (fold op (fold op current (first children))
                                  (rest children))])))])
        fold)))
   ((const true)
    (define (fold op initial any)
      (op initial any)))))</code></pre>
<p>And you can see why this can’t work: the <code>fold</code> bound by the <code>letrec</code> calls itself rather than going through the generic dispatch.</p>
<p>The way to fix this is to use the magic <code>define/generic</code> form to get a copy of the generic function, and then call <em>that</em>. This is syntactically horrid, but you can see why it is needed given the above. So a working version of this interface is:</p>
<pre><code>(define-generics foldable
  ;; not broken
  (fold operation initial foldable)
  #:defaults
  ((treelike?
    (define/generic fold/g fold)
    (define (fold op initial treelike)
      (let ([current (op initial (node-value treelike))]
            [children (node-children treelike)])
        (cond
          [(null? children) current]
          [(null? (rest children))
           ;; the last child goes through the generic fold: calling
           ;; the local fold on the empty tail would be an error
           (fold/g op current (first children))]
          [else
           (fold op (fold/g op current (first children))
                 (rest children))]))))
   ((const true)
    (define (fold op initial any)
      (op initial any)))))</code></pre>
<p>And indeed it is not broken:</p>
<pre><code>> (fold + 0 '(1 2 3))
6</code></pre>
<p>and with some tracing added:</p>
<pre><code>> (fold + 0 '(1 2 3))
fold/treelike + 0 (1 2 3)
fold/any + 1 2
fold/treelike + 3 (3)
6</code></pre>
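<p>I haven’t shown the code which produced this trace. A minimal sketch (mine, not part of the definitions above: the labels are just literal strings, and <code>object-name</code> recovers the name of the operation so that <code>+</code> prints as <code>+</code>) is to have each method print its arguments before doing its work. For the catch-all method, for instance:</p>
<pre><code>;; the catch-all method, instrumented: print a label and the
;; arguments, then do the real work
(define (fold op initial any)
  (printf "fold/any ~a ~a ~a~n" (object-name op) initial any)
  (op initial any))

;; the treelike method gets the same treatment, with the label
;; fold/treelike</code></pre>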
<h2 id="adding-a-special-case-to-fold-for-the-binary-tree">Adding a special case to <code>fold</code> for the binary tree</h2>
<p>So now, finally, we can add a special case of <code>fold</code> for the binary tree defined above, rather than needlessly consing a list of children. We will need the same explicit-generic-function hack as before, as the children of a binary tree may not themselves be binary trees.</p>
<pre><code>(struct binary-tree (value left right)
#:transparent
#:methods gen:treelike
((define (node-value bt)
(binary-tree-value bt))
(define (node-children bt)
(list (binary-tree-left bt)
(binary-tree-right bt))))
#:methods gen:foldable
((define/generic fold/g fold)
(define (fold op initial bt)
(fold/g op
(fold/g op (op initial (binary-tree-value bt))
(binary-tree-left bt))
(binary-tree-right bt)))))</code></pre>
<p>And now</p>
<pre><code>> (fold + 0 (binary-tree 1
(binary-tree 2 3 4)
(binary-tree 5 6 7)))
28</code></pre>
<p>and with some tracing</p>
<pre><code>> (fold + 0 (binary-tree 1
(binary-tree 2 3 4)
(binary-tree 5 6 7)))
fold/bt + 0 #(struct:binary-tree 1 #(struct:binary-tree 2 3 4) #(struct:binary-tree 5 6 7))
fold/bt + 1 #(struct:binary-tree 2 3 4)
fold/any + 3 3
fold/any + 6 4
fold/bt + 10 #(struct:binary-tree 5 6 7)
fold/any + 15 6
fold/any + 21 7
28</code></pre>
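<p>Nothing in <code>fold</code> is specific to addition, of course. As a quick check (this example is mine, but it uses only the definitions above), the same generic function will flatten either kind of tree into a list of its values, in reversed depth-first order:</p>
<pre><code>> (fold (λ (l v) (cons v l)) '() '(1 2 3))
'(3 2 1)
> (fold (λ (l v) (cons v l)) '()
        (binary-tree 1
                     (binary-tree 2 3 4)
                     (binary-tree 5 6 7)))
'(7 6 5 4 3 2 1)</code></pre>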
<h2 id="missing-clos">Missing CLOS</h2>
<p>In some ways this makes me miss CLOS: the explicit-generic-function hack is very annoying, single dispatch is annoying, not being able to define predicate-based methods separately from the <code>define-generics</code> form is annoying. But on the other hand predicate-based dispatch is pretty cool.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-01-08-generic-interfaces-in-racket-footnote-1-definition" class="footnote-definition">
<p>Perhaps these should be called ‘sloppy trees’ or something. <a href="#2021-01-08-generic-interfaces-in-racket-footnote-1-return">↩</a></p></li></ol></div>CV19 vaccination staff requirementsurn:https-www-tfeb-org:-fragments-2021-01-03-cv19-vaccination-staff-requirements2021-01-03T12:11:19Z2021-01-03T12:11:19ZTim Bradshaw
<p>Or: how many people will be needed to vaccinate enough people? How many people will keep on being needed?</p>
<!-- more-->
<p>Now that vaccines are available for CV19, an interesting question is how many people it will require to vaccinate enough of the population to produce herd immunity. I had a suspicion that this, or the organisational effort involved<sup><a href="#2021-01-03-cv19-vaccination-staff-requirements-footnote-1-definition" name="2021-01-03-cv19-vaccination-staff-requirements-footnote-1-return">1</a></sup>, might be the limiting factor. So I thought I’d try and work out what the number of people required might be.</p>
<h2 id="a-simple-linear-model">A simple linear model</h2>
<p>Let’s assume that vaccinations happen at a given rate, \(r(t)\), with \(t = 0\) being when they start. Then the number of people vaccinated at a given time \(T\) is</p>
<p>\[
N(T) = \int\limits_0^T r(t)\,dt
\]</p>
<p>If vaccinating each person takes \(\tau\) seconds of someone’s time then the effort required is \(\tau r(t)\). But people don’t work all the time, so the number of <em>people</em> required — the staffing cost — is</p>
<p>\[
S(t) = \frac{\tau}{\eta} r(t)
\]</p>
<p>Where \(\eta\) is the efficiency with which a person works, which includes time to sleep, breaks, weekends and so on.</p>
<p>Obviously all this depends on the form of \(r(t)\), which in real life will be complicated. I’ll assume it takes a simple form: from a low start it ramps up linearly to some value where it then sits until the initial vaccination program is complete.</p>
<p>\[
r(t) =
\begin{cases}
r_0 + kt & 0 \le t \lt t_0\\
r_0 + kt_0 & t \ge t_0
\end{cases}
\]</p>
<p>and</p>
<p>\[
S(t) = \frac{\tau}{\eta}
\begin{cases}
r_0 + kt & 0 \le t \lt t_0\\
r_0 + kt_0 & t \ge t_0
\end{cases}
\]</p>
<p>\(r(t)\) is easy to integrate, giving the form for \(N(t)\):</p>
<p>\[
\begin{aligned}
N(t) &=
\begin{cases}
r_0 t + \frac{k t^2}{2} & 0 \le t \lt t_0\\
r_0 t_0 + \frac{k t_0^2}{2} + (r_0 + k t_0)(t - t_0) & t \ge t_0
\end{cases}\\
&=
\begin{cases}
r_0 t + \frac{k t^2}{2} & 0 \le t \lt t_0\\
(r_0 + k t_0)t - \frac{k t_0^2}{2} & t \ge t_0
\end{cases}
\end{aligned}
\]</p>
<p>Finally, assume that the population is \(P\), that we need to vaccinate a proportion \(\rho\) and we want the programme to be complete at a time \(T\), so \(N(T) = \rho P\). And I’ll assume \(T \ge t_0\): this doesn’t actually matter because if \(T = t_0\) you get a model which has no constant part — the rate always increases linearly.</p>
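<p>Writing out \(N(T) = \rho P\) using the second case of \(N\) above gives</p>
<p>\[
\rho P = r_0 T + k \left( t_0 T - \frac{t_0^2}{2} \right)
\]</p>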
<p>Using this we can now solve for \(k\):</p>
<p>\[
k = \frac{\rho P - r_0 T}{t_0 T - \frac{t_0^2}{2}}
\]</p>
<p>and this gives us the peak staffing level:</p>
<p>\[
S_p
= \frac{\tau}{\eta}
\left(r_0 + \frac{\rho P - r_0 T}{T - \frac{t_0}{2}}\right)
\]</p>
<p>Another thing to work out is the equilibrium staffing level: if the immunity time after vaccination is \(T_i\), then people need to be revaccinated every \(T_i\), and this means that</p>
<p>\[
S_e = \frac{\tau\rho P}{\eta T_i}
\]</p>
<h2 id="some-numbers-for-the-linear-model">Some numbers for the linear model</h2>
<p>The two last expressions above depend on a bunch of parameters: here are some that are both not too frightening in terms of how long it all takes and not too frightening in terms of staffing requirements. I’ll use seconds as the basic unit of time and define \(M = 30 \times 24 \times 3600 = 2592000\): the length of a month in seconds.</p>
<ul>
<li>\(T = 8 M\): the programme should be complete eight months after it starts.</li>
<li>\(r_0 = 0\): initially no-one is being vaccinated.</li>
<li>\(t_0 = 2M\): it takes two months to ramp up.</li>
<li>\(\tau = 600\): it takes ten minutes of a person’s time to vaccinate someone on average<sup><a href="#2021-01-03-cv19-vaccination-staff-requirements-footnote-2-definition" name="2021-01-03-cv19-vaccination-staff-requirements-footnote-2-return">2</a></sup>.</li>
<li>\(\eta = (5 \times 7)/(7 \times 24) = 5/24\): people work for seven hours a day (not including break time) and work for five days a week. They don’t get holidays.</li>
<li>\(P = 5.5\times 10^7\): the population of England<sup><a href="#2021-01-03-cv19-vaccination-staff-requirements-footnote-3-definition" name="2021-01-03-cv19-vaccination-staff-requirements-footnote-3-return">3</a></sup> is 55 million.</li>
<li>\(\rho = 0.75\): you need to vaccinate about 75% of people to achieve herd immunity.</li>
<li>\(T_i = 12 M\): the immunity time is a year.</li></ul>
<p>Given these figures then</p>
<p>\[
\begin{aligned}
S_p &\approx 6548\\
S_e &\approx 3819
\end{aligned}
\]</p>
<p>So, at the peak, there will need to be about 6,548 people working full-time to achieve herd immunity in 8 months, and from then on about 3,819 people may be required to maintain it.</p>
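<p>These numbers are easy to check mechanically. Here is a small Racket sketch (mine, not from anything above: the names simply transliterate the parameters) which evaluates the two expressions for \(S_p\) and \(S_e\), giving about 6548 and 3819 respectively:</p>
<pre><code>;; the parameters of the linear model, as given above
(define M (* 30 24 3600))   ; a month in seconds
(define T (* 8 M))          ; programme length
(define r0 0)               ; initial vaccination rate
(define t0 (* 2 M))         ; ramp-up time
(define tau 600)            ; person-seconds per vaccination
(define eta 5/24)           ; efficiency with which people work
(define P 5.5e7)            ; population of England: 55 million
(define rho 0.75)           ; proportion to be vaccinated
(define Ti (* 12 M))        ; immunity time

;; peak staffing level
(define Sp (* (/ tau eta)
              (+ r0 (/ (- (* rho P) (* r0 T))
                       (- T (/ t0 2))))))

;; equilibrium staffing level
(define Se (/ (* tau rho P) (* eta Ti)))</code></pre>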
<p>If it takes longer to ramp up then the peak staffing goes up. If it takes 8 months (we never reach the steady state) then the peak staffing number is 11,458.</p>
<p>Here is a plot of the dependency of peak \(S_p\) on both the length of the vaccination programme (x axis, from 4 to 12 months) and the length of the ramp time (y axis, from 0 to 10 months):</p>
<div class="figure"><img src="/fragments/img/2021/vaccination-sr/linear-sr.svg" alt="Peak staffing as function programme length and ramp time, linear model" />
<p class="caption">Peak staffing as function programme length and ramp time, linear model</p></div>
<h2 id="comparing-the-model-with-reality">Comparing the model with reality</h2>
<p>In real life a model where \(r(t)\) ramps up linearly and hence \(N(t)\) quadratically before hitting some nice ceiling is hopelessly oversimplified. But, well, what does a model like this say?</p>
<p>Between the 8th December 2020 and the 27th December 2020 <a href="https://www.england.nhs.uk/statistics/statistical-work-areas/covid-19-vaccinations/" title="NHS England CV19 vaccination statistics">NHS England administered 786,000 vaccinations</a><sup><a href="#2021-01-03-cv19-vaccination-staff-requirements-footnote-4-definition" name="2021-01-03-cv19-vaccination-staff-requirements-footnote-4-return">4</a></sup>. This was a period of 20 days: so if we assume \(r_0 = 0\) and the linear ramp model we can compute \(k\):</p>
<p>\[
\begin{aligned}
k &= \frac{2\times 786\times 10^3}{(20\times 24 \times 60^2)^2}\\
&\approx 5.26\times 10^{-7}
\end{aligned}
\]</p>
<p>The fastest way to vaccinate enough people is simply to keep ramping up the rate by adding staff linearly (according to the model), or in other words by letting \(t_0 = T\). In this case the time \(T\) to vaccinate enough people is simply:</p>
<p>\[
T = \sqrt{\frac{2\rho P}{k}}
\]</p>
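<p>(This is just the first case of \(N\) above with \(r_0 = 0\): the ramp lasts the whole programme, so \(\rho P = N(T) = \frac{k T^2}{2}\).)</p>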
<p>And using the above numbers for \(\rho\) and \(P\), this gives \(T \approx 1.25\times 10^7\,\mathrm{s}\), or about 145 days, or just short of five months. This is not hopeless!</p>
<p>The question is whether that is achievable in terms of staff numbers. Well, the number of staff required in the case where there is no cap on staff is simply</p>
<p>\[
S(t) = \frac{\tau}{\eta} kt
\]</p>
<p>And the peak staffing is therefore \(S_p = (\tau/\eta)kT\). Using values for \(\tau\) and \(\eta\) from before, together with \(k\approx 5.26\times 10^{-7}\), gives \(S_p \approx 18980\). A little short of 19,000 people is probably pretty achievable.</p>
<p>The alternative is to assume that \(S\) is capped somewhere and work out the time the programme will take in that case. Let’s assume that \(S_p = 7000\); then we can compute \(t_0 = (S_p\eta)/(k\tau) \approx 4.62 \times 10^6\,\mathrm{s}\), or about 53 days — a little short of two months. Then we can use the original expression for the time to vaccinate enough people:</p>
<p>\[
\begin{aligned}
T &= \frac{\rho P}{k t_0} + \frac{t_0}{2}\\
&\approx 1.93 \times 10^7\,\mathrm{s}\\
&\approx 223\,\mathrm{d}
\end{aligned}
\]</p>
<p>223 days is about 7.5 months. Which is astonishingly close to my guess for how long it might take.</p>
<p><em>However</em>, the vaccination data is somewhat misleading<sup><a href="#2021-01-03-cv19-vaccination-staff-requirements-footnote-5-definition" name="2021-01-03-cv19-vaccination-staff-requirements-footnote-5-return">5</a></sup>: the figures from the 8th to the 27th December 2020 include <em>no</em> second doses. So in fact the number of complete vaccinations given in that interval is just half the headline figure: 393,000 instead of 786,000. Using this amended figure we get different and significantly worse numbers for the time to vaccinate enough people, but better numbers for peak staffing requirements in the uncapped case:</p>
<ul>
<li>\(k\approx 2.63\times 10^{-7}\), just half of the previous value;</li>
<li>without capping staffing, \(T\approx 1.77\times 10^7\,\mathrm{s} \approx 205\,\mathrm{d}\), or nearly seven months, with a peak staffing requirement \(S_p \approx 13400\);</li>
<li>capping staffing at \(S_p = 7000\) gives \(t_0 \approx 9.23 \times 10^6\,\mathrm{s} \approx 107\,\mathrm{d}\), or somewhat over 3.5 months, and \(T \approx 2.16\times 10^7\,\mathrm{s} \approx 250\,\mathrm{d}\), or about 8.5 months.</li>
<p>The times are less than double because the number of vaccinations goes like the square of time during the ramp.</p>
<p>At the time I’m writing, the <a href="https://www.nhs.uk/conditions/coronavirus-covid-19/coronavirus-vaccination/coronavirus-vaccine/">English NHS is intending to delay the second dose</a>:</p>
<blockquote>
<p>The latest evidence suggests the 1st dose of the COVID–19 vaccine provides protection for most people for up to 3 months.</p>
<p>As a result of this evidence, when you can have the 2nd dose has changed. This is also to make sure as many people can have the vaccine as possible.</p>
<p>The 2nd dose was previously 21 days after having the 1st dose, but has now changed to 12 weeks after. […]</p></blockquote>
<p>So although I don’t think it’s safe to assume that there will be no second vaccinations during the initial programme, it <em>is</em> clear that the main aim is to do as many first doses as possible as quickly as possible. So in real life the times and staffing requirements might be somewhere between the two.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Based on English NHS data on vaccinations given between 8th and 27th December 2020, using a very simple linear model (see above for the model and its parameters) and assuming no second doses, then, if staffing numbers can be ramped up to 7,000 in 53 days from the 8th of December 2020, England (and presumably the UK as a whole) might well be able to vaccinate enough people in about 223 days from that date: by mid to late July 2021. If staffing numbers can be ramped up indefinitely at the same rate, then this could be done in 145 days, or by early May 2021, with a peak staffing requirement a little short of 19,000.</p>
<p>With the same model and the same data but including second doses (which halves the number of complete vaccinations given between 8th and 27th December 2020), staff numbers need to be ramped to 7,000 in 107 days from 8th December 2020 (this is less aggressive), and the programme will take about 250 days and might be complete by mid August 2021. If staffing numbers can be ramped indefinitely at the initial rate, the programme could be complete in 205 days, or by early July 2021, with a peak staffing requirement of a little less than 13,500.</p>
<p>Using the same model and assuming immunity lasts for a year, a little short of 4,000 people will be needed to maintain the vaccination programme indefinitely.</p>
<p>All of this relies on a vaccination requiring a total of only 10 minutes of human effort: not just that of frontline staff but also the time spent shipping and handling the vaccine, administrative overhead and so on. It also neglects holiday time for the staff involved. Including holidays and sickness would increase the staffing requirements by 10–20%, assuming 30 days holiday a year. This is also based on very early data on achieved vaccination rates.</p>
<p>If this is even a very rough approximation to the truth then, assuming some organisational competence (a dangerous assumption when the UK government and its fellow travellers are involved), staffing should not be the limiting factor in the vaccination programme.</p>
<p>Reality may be a little more complicated than this.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2021-01-03-cv19-vaccination-staff-requirements-footnote-1-definition" class="footnote-definition">
<p>Given the magnificent ability of the UK government to organise things I am sure that the organisational problems will simply melt away. One only needs to look at the splendour that is Dido Harding, and her truly wonderful achievements with the test and trace effort in mid 2020 to see this. <a href="#2021-01-03-cv19-vaccination-staff-requirements-footnote-1-return">↩</a></p></li>
<li id="2021-01-03-cv19-vaccination-staff-requirements-footnote-2-definition" class="footnote-definition">
<p>This seems like a terrifyingly large figure. For instance appointments for ’flu vaccinations at a surgery near me were recently spaced three minutes apart. But remember that this figure is for two doses, and should include <em>all</em> the time spent by people to give the vaccination: it should include the time spent doing administrative work, delivering the vaccine and so on. I think ten minutes is probably an underestimate, and perhaps a serious one. <a href="#2021-01-03-cv19-vaccination-staff-requirements-footnote-2-return">↩</a></p></li>
<li id="2021-01-03-cv19-vaccination-staff-requirements-footnote-3-definition" class="footnote-definition">
<p>I’m picking England rather than the UK since the health service is devolved. <a href="#2021-01-03-cv19-vaccination-staff-requirements-footnote-3-return">↩</a></p></li>
<li id="2021-01-03-cv19-vaccination-staff-requirements-footnote-4-definition" class="footnote-definition">
<p>There seems to be a significantly higher figure available for the same time period: I think that higher figure must be for the UK as a whole. <a href="#2021-01-03-cv19-vaccination-staff-requirements-footnote-4-return">↩</a></p></li>
<li id="2021-01-03-cv19-vaccination-staff-requirements-footnote-5-definition" class="footnote-definition">
<p>I am sure the NHS is not trying to mislead people. I am equally sure that the government will abuse the data to do just that. <a href="#2021-01-03-cv19-vaccination-staff-requirements-footnote-5-return">↩</a></p></li></ol></div>Backup retentionurn:https-www-tfeb-org:-fragments-2021-01-02-backup-retention2021-01-02T17:33:01Z2021-01-02T17:33:01ZTim Bradshaw
<p>Or: should you keep that tape?</p>
<!-- more-->
<p>There is an interesting curve of backup retention.</p>
<p>Initially, you should definitely keep them because they’re, well, backups that you might need to restore.</p>
<p>Then there comes a time where you should almost certainly not keep them because they’re too old to be useful as backups.</p>
<p>If they survive that they become, accidentally, archives: perhaps that tape sitting in some box has the only remaining copy of whatever-it-is. So don’t erase it.</p>
<p>At the point where nothing will read the tape any more, well, whatever was on it is effectively gone now, so throw it away.</p>
<p>At some point after that, one or both of two things happen: people become willing and able to do seriously heroic things to read really old media which might have the last remaining copy of something on it and/or the media itself becomes rare enough that it’s now a historical artifact worth preserving. The second thing can’t happen unless enough copies of the media get thrown away in earlier phases of the process: I don’t think minidiscs would be interesting historical artifacts (yet), but if I still had a Fuji Eagle I would definitely not throw it away.</p>
<p>Later still it becomes possible to print, cheaply, replicas of the thing which are accurate at the atomic level, at which point its value should drop to the cost of making another clone, but in fact people start spending huge amounts of effort authenticating the original copy of the object, which is held to be somehow ineffably different to all the perfect clones. At some point, people lose track: no-one now knows which the original <em>is</em> any more, and since there is no physical distinction no-one ever will again. The people who have paid to have their copy authenticated as the original now spend much of their time arranging to have the other people who have done that assassinated.</p>
<p>I forget which film this is.</p>Unhappy far-off thingsurn:https-www-tfeb-org:-fragments-2020-12-23-unhappy-far-off-things2020-12-23T11:42:20Z2020-12-23T11:42:20ZTim Bradshaw
<blockquote>
<p>It is the Abomination of Desolation, not seen by prophecy far off in some fabulous future, nor remembered from terrible ages by the aid of papyrus and stone, but fallen on our own century, on the homes of folk like ourselves: common things that we knew are become the relics of bygone days. It is our own time that has ended in blood and broken bricks.</p></blockquote>
<!-- more-->
<p>It can’t happen here, can it? Of course it can not: this is something that happens to other people in lesser countries far away. Something we read about in newspapers or watch in enchanted horror on the news. We watch as some unhappy country eats itself alive, vomiting forth a spray of refugees who, somehow but inevitably, we will not be able to accept here, though they be ever so deserving. And of course these distant tragedies are never our fault, not even slightly.</p>
<p>No, these tragedies can not happen here: we are too clever, too well-educated, too English. We have too much to lose so it will not be allowed. And if it were to happen here it would of course not be our fault: it would most certainly be the doings of inferior foreign people who wish us ill. We are, after all, simply better fellows than those unhappy far-off people.</p>
<hr />
<p>Quote from <em>Unhappy far-off things</em> by Lord Dunsany.</p>An Englishman's camera bagurn:https-www-tfeb-org:-fragments-2020-12-18-an-englishman-s-camera-bag2020-12-18T11:06:47Z2020-12-18T11:06:47ZTim Bradshaw
<p>Or: you can’t buy history, however much money you have.</p>
<!-- more-->
<p>Billingham bags are beautifully-made leather-and-canvas things, which when new probably smell of nothing and when later cleaned might smell faintly of leather and old sails. Both the canvas and the leather will wear prettily over the decades. You could imagine leaving such a bag to your younger son in your will (your oldest son would, of course, get the house on Long Island, along, perhaps, with your mistress)<sup><a href="#2020-12-18-an-englishman-s-camera-bag-footnote-1-definition" name="2020-12-18-an-englishman-s-camera-bag-footnote-1-return">1</a></sup>.</p>
<p>Billingham bags are what Americans who believe themselves cultured think English gentlemen might use: they are, in fact, New English. Of course, no Englishman would be seen dead with a Billingham bag, let alone in polite company, any more than they would be seen about with a Leica (“the Rolls-Royce of cameras”: ostentatious, vulgar, probably available in pink).</p>
<p>An Englishman’s camera bag is nothing like a Billingham. Indeed, it is not very much like a camera bag. It is made of a material which might once have been waxed cotton but is now mostly grease and patches. It smells of mould, ferrets, fixer and old blood — it is usually better not to ask where the blood came from. It is not, of course, padded: the owner will improvise padding from folded-up broadsheet newspapers, none later than the 60s. It may have a strap, and this may not be made entirely of string. In one of the outer pockets there will be a quarter-plate darkslide for a model of camera not made since before the great war. In others there will be OS maps of Afghanistan, passports (all expired), several glass syringes and Kendal mint cake. In the bottom of the bag will be a dense layer of detritus including feathers, the mummified remains of a mouse, some filters in imperial sizes, a Watkins Bee meter possibly in working order, much string, a remote release apparently partly eaten, bits of film and what might be the remains of the original strap. It is better not to investigate this layer too closely.</p>
<p>The Englishman’s camera bag is not left to his children. Rather, they discover it years after his death in a cupboard, slightly mouldier than it once was and apparently having served as a home to several generations of small birds. No one particularly wants it, but since it is, somehow, useful (certainly more useful than something made of leather, canvas and salt air), it is adopted and so persists.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2020-12-18-an-englishman-s-camera-bag-footnote-1-definition" class="footnote-definition">
<p>Note: this article is <em>intentionally</em> using sexist language and ideas which are, frankly, offensive. Its entire purpose is to satirise both a certain sort of photographer (always male, usually American) and a certain sort of English person (again, always male, and who likes his martinis shaken and not stirred). I am not either of these sorts of people and I certainly do not support the attitudes in this article. I also did not inherit my father’s camera bag, although I did inherit his Curta. <a href="#2020-12-18-an-englishman-s-camera-bag-footnote-1-return">↩</a></p></li></ol></div>The Boris maximizerurn:https-www-tfeb-org:-fragments-2020-12-11-the-boris-maximizer2020-12-11T18:37:08Z2020-12-11T18:37:08ZTim Bradshaw
<p>Or, a theory about the mess we’re in.</p>
<!-- more-->
<p>The UK is about to finally leave the EU as the transition period finishes at the end of the year. Leaving the EU was always a terrible idea, based on appealing to a combination of the bigotry of mostly-older voters<sup><a href="#2020-12-11-the-boris-maximizer-footnote-1-definition" name="2020-12-11-the-boris-maximizer-footnote-1-return">1</a></sup> and falsified memories of a golden age of English glory which never existed. But the decision is made: the UK has chosen to fade into irrelevance and poverty<sup><a href="#2020-12-11-the-boris-maximizer-footnote-2-definition" name="2020-12-11-the-boris-maximizer-footnote-2-return">2</a></sup> and that can’t be undone any time soon.</p>
<p>What it could have decided to do was to minimize the damage from brexit by agreeing a trade deal on good terms with the EU. It will almost certainly fail to do that: the EU, very reasonably, wants to ensure that a country granted privileged access to its markets can’t then undercut the EU’s own members by lowering standards. <a href="http://theconversation.com/brexit-talks-the-sticking-points-explained-151706" title="Brexit talks: the sticking points explained">This position was clear</a> <a href="http://www.theguardian.com/politics/2020/dec/13/the-eus-red-lines-were-clear-in-2016" title="The EU's red lines were clear in 2016">before the referendum</a> and has not changed since then. The UK government, having pretended to be unaware of this position, now finds it unacceptable: I suppose because undercutting the EU by lowering standards is exactly what it wants to do. And there is some stupidity about fishing as well.</p>
<p>So, at the end of the year when the transition period ends, the UK will probably leave with no deal at all<sup><a href="#2020-12-11-the-boris-maximizer-footnote-3-definition" name="2020-12-11-the-boris-maximizer-footnote-3-return">3</a></sup>, which will be an immediate catastrophe: there is a quite serious possibility of food shortages for instance. Almost no-one who voted leave in 2016 voted for this, even those who understood what they were voting for.</p>
<p>How did we get into this mess?</p>
<h2 id="disaster-capitalism">Disaster capitalism</h2>
<p>There is a theory, well-described by <a href="https://www.antipope.org/charlie/blog-static/2020/12/so-you-say-you-want-a-revoluti.html" title="So you say you want a revolution">Charlie Stross</a> and others, that what has happened is that a small group of clever-but-evil people have taken over the Conservative party and, with the support of a larger group of bigots, have consciously tried to achieve this outcome, so that they can profit from the resulting chaos. This is <a href="https://en.m.wikipedia.org/wiki/The_Shock_Doctrine" title="The shock doctrine">disaster capitalism</a>: the idea that small factions are deliberately causing disasters so that they can force through measures which will benefit them in the aftermath, and which people will not notice amongst all the smoke and rioting.</p>
<p>That sounds plausible: there are certainly plenty of bigots and xenophobes, particularly in the British tory party. And certainly the original brexit vote was driven very substantially by bigots and xenophobes. So, well, this is a situation ripe to be exploited by a small clique of disaster capitalists, isn’t it? Perhaps this clique is headed by Jacob Rees-Mogg who is certainly evil, certainly a financier, and also an important member of the European Research Group which was one of the groups which helped drive brexit.</p>
<p>Well, this all sounds very reasonable, then: there is a conspiracy by a small hidden group of financiers who have gained control of the tory party and are driving the UK into the ground to enrich themselves. How obvious.</p>
<p>Except, wait. Didn’t some other group of people once believe a theory a bit like this? That there was a group — a cabal in fact — of financiers who were working behind the scenes to cause chaos and destruction (and there certainly was chaos and destruction) to enrich themselves at the cost of the good, honest, ordinary folk of the country? What was the name of that country, again, and who were the people who believed this? Ah, yes, it was Germany, and the people who believed this were the nazis. And it didn’t end well, did it?</p>
<p>Of course, the theories are not identical, and I am very sure that many people who believe the disaster capitalism theory are not antisemites<sup><a href="#2020-12-11-the-boris-maximizer-footnote-4-definition" name="2020-12-11-the-boris-maximizer-footnote-4-return">4</a></sup>, let alone nazis. One crucial difference is that membership of the supposed cabal of disaster capitalists is something a person can choose of their own free will, while if you are Jewish you are so because of your ancestry which you can not choose. The lies the nazis told about a mythical cabal of Jewish financiers, along with all the other lies they told about Jews, were clearly a lot more toxic than the idea of a cabal of disaster capitalists within and behind the tory party.</p>
<p>But they are both conspiracy theories: they both assume there is a small group of people working, mostly in secret, to cause chaos and disaster from which they will benefit hugely at the cost of the ordinary, honest, working folk. And Something Must Be Done about this, and that Something might include an uprising and, perhaps, in due course, camps of some kind where the conspirators could be, well, processed.</p>
<h2 id="the-nature-of-the-conspiracy">The nature of the conspiracy</h2>
<p>It’s in the nature of conspiracy theories to be false, because people are not very good at conspiring, and when they do conspire they’re not very good at keeping the conspiracy secret.</p>
<p>But the disaster capitalist theory also relies on another common notion: the idea that, somewhere very close at hand but always just out of sight, there exists a group of people who are enormously more competent, or machines that are enormously more capable, or drugs that are enormously better than anything to which we have access. Sometimes it is also clear that we are being actively <em>denied</em> access to this superior technology, perhaps by these invisible superior people. I call this notion the <em>myth of competence</em>.</p>
<p>Just sometimes, the myth of competence is not a myth: the people who put humans on the Moon were actually pretty good at what they did, even if they only became as good as they were by going through <a href="https://en.m.wikipedia.org/wiki/Apollo_1" title="Apollo 1 fire">an awful and unnecessary accident</a>. The people at Bletchley Park during the second war were also pretty good. And there are other examples, of course.</p>
<p>But almost always the myth of competence really is a myth: something people want to be true which is not actually true.</p>
<p>A good example of the myth of competence is the NSA. The NSA, obviously, is staffed by the most elite mathematicians and computer scientists: people who are just better than everyone else. People with a deep understanding of the security of computing systems, working for an organisation with hugely deep pockets. The NSA is just spookily good as well as, conveniently, just out of sight. And yet in 2013, a contractor to the NSA was able to acquire a vast trove of sensitive data from them, something that would not be possible if their security was at all competent. The NSA, in fact, are incompetent, or at least they were so in 2013 and almost certainly they still are.</p>
<p>And this isn’t surprising, in fact. Let’s imagine that you’re a smart person with an interest in sifting through big data to look for patterns. You have a couple of career options.</p>
<ul>
<li>You could go to work for a web company, where you will get to deal with as much data as you want, where you get to go to parties with other nerds and talk about the cool stuff you are doing, and where you stand a chance you can persuade yourself is reasonable of getting rich. You can probably also fool yourself that what you are doing is ethical<sup><a href="#2020-12-11-the-boris-maximizer-footnote-5-definition" name="2020-12-11-the-boris-maximizer-footnote-5-return">5</a></sup>. If the job doesn’t suit you you can move to another company, or you could start your own, giving you a rather smaller chance of getting very rich indeed.</li>
<li>Or you could get a job with the security services. You will not be able to tell anyone outside your workplace what you do. You will not get rich (you might not even get a decent pension nowadays). You won’t easily be able to change jobs, at least not outside the organisation you work for. Given who your ultimate masters are and what they do to people who they don’t like, you might have worries about your safety in the longer term, and you certainly would when you realize just how unethical what they are doing is and decide to tell someone about that.</li></ul>
<p>Which career sounds more appealing? Well, clearly some people are attracted to the whole cloak-and-dagger aspect of the second option. Those people tend also to own rather too much camouflage clothing and take paintball games altogether too seriously. For the rest of us the chance to do things which are just as technically interesting while fooling ourselves that we might get rich is probably rather more compelling.</p>
<p>And so it turns out that the NSA isn’t, in fact, staffed by super-intelligent super-competent people after all: it’s staffed by the people Google and Facebook didn’t hire.</p>
<p>The disaster capitalist theory assumes that there is a cabal of evil super geniuses — the disaster capitalists — who are working in secret to destroy the country for their own benefit, probably from their sinister supervillain lairs in hollowed-out volcanoes. Somewhere, behind the incompetence and stupidity of the tory party we can see, exists a group of evil geniuses who, somehow, we never can quite see. This is both a conspiracy theory and a classic example of the myth of competence. I suggest that it is not true, and that there is an alternative, simpler, explanation.</p>
<h2 id="maximizers">Maximizers</h2>
<p>There is a <a href="https://www.lesswrong.com/tag/paperclip-maximizer" title="The paperclip maximizer">famous</a> <a href="https://www.nickbostrom.com/ethics/ai.html" title="Ethical issues in advanced artificial intelligence">thought experiment</a> about an imagined artificial general intelligence (AGI) whose goal is to maximize the number of paperclips in the universe, and which proceeds to do that, with bad consequences for humans and, ultimately, probably itself too, as it turns everything into facilities for making paperclips.</p>
<p>This seems like a fairly silly idea, not least because of the G in ‘AGI’: general intelligence is, generally, not associated with this kind of monomania. It is quite close to the way that <em>genes</em> work: the ‘purpose’ of a gene, or replicator, is to make as many copies as possible of itself, and this drives evolution. But, well, we do seem to be dealing with monomaniacs of one kind or another, and it’s an interesting idea to explore.</p>
<p>The first fairly obvious thing is that maximizers can lead to quite nasty consequences: the paperclip maximizer destroys everyone and everything in order to make more paperclips, for instance, and 2020 has shown that packages of genes which replicate at the costs of the organisms hosting them can be quite bad news, in case we had forgotten that.</p>
<p>The second thing is that maximizers can run into a nasty problem: local maxima. You can think of a maximizer as something which is walking around on the surface of some function which it is trying to maximize. An obvious approach is to calculate the gradient of the function and then move in the direction where it is steepest. At a point where the gradient is zero and the second derivatives are all negative, you’ve reached a maximum. This technique is called <em>gradient ascent</em>, or, equivalently when used to find minima, <a href="https://en.m.wikipedia.org/wiki/Gradient_descent" title="gradient descent">gradient descent</a>. It seems like a good strategy if you don’t think about it too hard. But consider what would happen if you were trying to maximize your altitude on Earth using this strategy, and you started in Scotland. If you were very lucky indeed you might get to the summit of Ben Nevis, from which all directions lead down. But Ben Nevis completely fails to be the highest point on the Earth’s surface: it’s a local maximum, not a global one. And more likely, if you start where I used to live, you’d end up at the top of Lady Fife’s Brae on Leith Links, which isn’t even the highest point on Leith Links.</p>
<p>To deal with this problem maximizers need to be able to explore bits of the space far from where they currently are, so they can see whether they would do better by moving far away. This requires various clever tricks: a dumb maximizer will end up getting trapped on local maxima most of the time.</p>
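<p>To see how easily a dumb maximizer gets trapped, here is a sketch (mine, in Racket, with a made-up one-dimensional landscape: two hills, one higher than the other) of naïve gradient ascent:</p>
<pre><code>;; a made-up landscape: a small hill near x = -1, a higher one near x = 2
(define (sq x) (* x x))
(define (height x)
  (+ (exp (- (sq (+ x 1))))
     (* 2 (exp (- (sq (- x 2)))))))

;; numerically estimated gradient of f at x
(define (gradient f x)
  (define h 1e-6)
  (/ (- (f (+ x h)) (f (- x h))) (* 2 h)))

;; naive gradient ascent: climb until the gradient (nearly) vanishes
(define (ascend f x)
  (define g (gradient f x))
  (if (&lt; (abs g) 1e-8)
      x
      (ascend f (+ x (* 0.1 g)))))

;; (ascend height -2.0) converges to about -1: the top of the small
;; hill.  The higher hill at 2 is in plain view, but every step towards
;; it initially goes downhill, so the ascender never finds it.</code></pre>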
<p>As well as being close to the way genes work, the idea of maximizers is also fairly close to the way that a lot of economists think about people: people are assumed to spend their time trying to maximize their <a href="https://en.m.wikipedia.org/wiki/Utility_maximization_problem" title="utility maximisation problem">utility</a>. Well, this is true, but usually it’s vacuous because ‘utility’ for most people has a definition which is unknown but certainly extremely complicated, and the maximization method they use is also unknown but almost certainly complicated. So saying people are trying to maximize their utility means, really, nothing: it just helps economists feel as if what they are doing is science.</p>
<p>But sometimes, for some people, it does mean something. Some people have a utility function which is obvious and relatively simple. Conveniently, these people often also only have very rudimentary maximizers.</p>
<h2 id="the-boris-maximizer">The Boris maximizer</h2>
<p>Boris Johnson is such a person. Boris Johnson’s utility function is Boris Johnson: his only purpose in life is that there should be maximum Boris: more power, more glory, more worship for Boris. He cares about this to the exclusion of all else: he is the Boris maximizer. And like many people with utility functions this obvious and simple his technique for achieving maximum Boris is also rather simple: it’s either gradient ascent or something very close to it.</p>
<p>The standard term for people like Johnson, of course, is <a href="https://en.wikipedia.org/wiki/Narcissism" title="narcissism">narcissist</a>, and there is a lot written about narcissists and narcissism. Since I want to understand how narcissists end up driving themselves and others into bad places, I’m going to stick with the notion of narcissists as rather simple maximizers of themselves rather than get lost in a wash of made-up pop psychology.</p>
<p>Another important thing about Johnson is that he’s a very pure example of what he is. Trump is a narcissist of course, but Trump <em>also</em> is full of fear, resentment and bigotry: Johnson isn’t. Johnson is upper-class, rich, went to the right school and university and has exactly the belief system you would expect from his background: he has never questioned anything, never thought deeply about anything, and he is not envious of anyone because why would he be? He is not, in fact, able to think hard enough about anything to even see that there might be a problem: doing so would require thinking about other people as more than tools for maximizing Boris, and he certainly is not able to do that.</p>
<p>Boris Johnson has nothing in his head but maximizing Boris: he is the paperclip maximizer made flesh.</p>
<h2 id="the-mess-were-in">The mess we’re in</h2>
<p>Firstly, <em>there is no brexit cabal</em>: there is no secret group of disaster capitalists scheming to destroy the UK, and still less is there some hidden group of clever brexiteers in the tory party: the closest they have to that is Dominic Cummings, who is at least not stupid, but is also a crank: someone who does not realize that there are things he doesn’t understand and who certainly is not as clever as he thinks he is. For the rest of the brexiteers in government, well, the people we can see are the people there are, and they are not pretending to be incompetent and stupid: they <em>are</em> incompetent and stupid.</p>
<p>Secondly <em>Boris Johnson doesn’t care about brexit</em>: Boris Johnson cares only about Boris Johnson. He is purely a machine for increasing the glory and worship of himself: a Boris maximizer.</p>
<p>Thirdly, while there is no cabal, there <em>are</em> a significant number of people in the tory party and elsewhere who are xenophobes and bigots and who believe in an invented idea of a golden age of England<sup><a href="#2020-12-11-the-boris-maximizer-footnote-6-definition" name="2020-12-11-the-boris-maximizer-footnote-6-return">6</a></sup> which is now, as it always has been, just beyond living memory<sup><a href="#2020-12-11-the-boris-maximizer-footnote-7-definition" name="2020-12-11-the-boris-maximizer-footnote-7-return">7</a></sup>. These people want desperately to leave the EU because it is full of foreign people and is holding back their imagined restoration of the golden age of England<sup><a href="#2020-12-11-the-boris-maximizer-footnote-8-definition" name="2020-12-11-the-boris-maximizer-footnote-8-return">8</a></sup>.</p>
<p>In 2016, David Cameron made the disastrous mistake of calling a referendum to make these people go away. At that point Johnson had to make the only decision a maximizer ever makes: what should he do to maximize Boris? Competent people understand that allowing maximizers access to power is extremely dangerous: no competent group of people would ever select Johnson for high office. In a tory party run by competent people he would never achieve the glory he deserved. But he might achieve it in one run by incompetent people. So he threw in his lot with the brexiteers.</p>
<p>And the brexiteers won: there really are a lot of aged bigots in the UK, it turns out.</p>
<p>The government then spent nearly three years falling about, as it became apparent that the brexiteers who had nominally won had no plan at all for what to do because they were simply not smart enough to think through the consequences of what they wanted. Indeed the only idea they had seemed to be that the remainers should do their planning for them, in much the way that adults do the planning for their children.</p>
<p>During the period of falling about the attitudes of the brexiteers hardened: as it became more and more clear how confused and stupid their aims were, they became more and more rigid in their thinking: they turned into fanatics. As fanatics they are unwilling, ever, to consider any ideas in contradiction with their fanaticism and unwilling, ever, to give up, whatever the cost. These are not people you want in government<sup><a href="#2020-12-11-the-boris-maximizer-footnote-9-definition" name="2020-12-11-the-boris-maximizer-footnote-9-return">9</a></sup>.</p>
<p>In 2016, Johnson failed at his maximization project: there were enough competent people left, then, to keep him well away from any real power, at least for a while. But this didn’t last: in 2019 the fanatics won, and Johnson finally achieved maximum Boris: he became prime minister (or as he probably thinks of it, ‘world king’). But even as he was anointed he was in terrible trouble, although he did not realize it then and probably still does not.</p>
<p>The trouble he was in is that he can only operate by gradient ascent and he had achieved this on the back of increasingly fanatical brexiteers with a serious competence problem. If he started listening to the competent people (there still were some, then), and doing what they suggested — for instance cutting a good deal with the EU — the fanatics would hate him for betraying their cause, and even if they did not do what fanatics often do to those who betray them, their hatred alone would certainly temporarily <em>reduce the amount of Boris</em>. Gradient ascent will not allow this, and so the competent people were systematically driven out of government, to be replaced by fanatics whose endless chants of praise would further maximize Boris. Never mind that they were also grossly incompetent: competence is not relevant to maximizing Boris.</p>
<p>He was now stuck on top of a local maximum.</p>
<p>And there he remains: all around the little hill he is sitting on are deep valleys, the crossing of which means a temporary reduction in Boris which gradient ascent will not allow. A little way away, in clear view, there are other, much larger hills, on top of which there would be far more Boris. But he can not reach them, because he can not, ever, reduce the amount of Boris.</p>
<p>And so there will be no deal with the EU: not because of a cabal of evil disaster capitalists somewhere just out of sight but because of the incompetence and fanaticism in full view in the government. And that incompetence and fanaticism is sustained by Johnson’s goal of maximizing Boris and his inability to do anything to reduce the amount of Boris, however temporarily, and even if doing so would ultimately increase it.</p>
<p>Well, he can not, but history will. Boris Johnson could have chosen to be the prime minister who minimized the damage to the UK from brexit, or even the person whose decision in 2016 to support remain led to the UK staying in the EU. But he will not be: he will be the prime minister who oversaw a no deal exit, whose actions led to the break-up of the UK and who, because he had surrounded himself with incompetent fanatics, caused many thousands of unnecessary deaths from CV19. Perhaps he even dimly knows this.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2020-12-11-the-boris-maximizer-footnote-1-definition" class="footnote-definition">
<p>At the time of the referendum 27% of voters then aged 18–24 voted to leave, rising to 60% of voters then aged 65 or older. The younger voters, of course, will be most affected by the decision as they will live more years with their life chances restricted by it. Indeed many of the older cohort will already be dead and thus will have voted purely to damage other people’s chances, having themselves benefited from membership of the EU for most of their lives. The demographics are such, in fact, that there is almost certainly now no majority support for leaving the EU, and there has not been for several years. <a href="#2020-12-11-the-boris-maximizer-footnote-1-return">↩</a></p></li>
<li id="2020-12-11-the-boris-maximizer-footnote-2-definition" class="footnote-definition">
<p>And it hasn’t only chosen that. As I’ve written previously: the consequences of brexit — even a ‘good’ brexit — will be that the administrative part of the UK’s government (probably, soon, this means the government of England and Wales, after Scotland secedes and rejoins the EU) will be working at or beyond its capacity for a decade. That decade, of course, is the decade in which action must be taken if we are to avoid catastrophic global warming. The UK, therefore, will play no useful part in dealing with global warming, and thus further increase the chances of a catastrophe which will kill billions of humans, mostly not yet born. Conveniently, almost all the brexit voters will be dead by the time this matters. <a href="#2020-12-11-the-boris-maximizer-footnote-2-return">↩</a></p></li>
<li id="2020-12-11-the-boris-maximizer-footnote-3-definition" class="footnote-definition">
<p>As I write this, there is still some fading hope that a deal will be struck, but neither the EU or the UK sound at all optimistic, and there is very little time left. <a href="#2020-12-11-the-boris-maximizer-footnote-3-return">↩</a></p></li>
<li id="2020-12-11-the-boris-maximizer-footnote-4-definition" class="footnote-definition">
<p>Although the UK Labour party, where there will be many believers in the disaster capitalism theory, has had a rather serious problem with antisemitism recently. These two things may not be related, of course: but they may be. <a href="#2020-12-11-the-boris-maximizer-footnote-4-return">↩</a></p></li>
<li id="2020-12-11-the-boris-maximizer-footnote-5-definition" class="footnote-definition">
<p>Or you could in 2013 when I wrote the text from which this section is extracted: not so much now, I think. <a href="#2020-12-11-the-boris-maximizer-footnote-5-return">↩</a></p></li>
<li id="2020-12-11-the-boris-maximizer-footnote-6-definition" class="footnote-definition">
<p>Not Scotland, not Wales, certainly not Northern Ireland: England. <a href="#2020-12-11-the-boris-maximizer-footnote-6-return">↩</a></p></li>
<li id="2020-12-11-the-boris-maximizer-footnote-7-definition" class="footnote-definition">
<p>This, of course, is related to the myth of competence: somewhere, just before we were born, there was a golden (or, in fact, a white) England where flowers bloomed, birds sang, and everyone was happy. And the fact that flowers don’t bloom and birds don’t sing and everyone is miserable is nothing to do with our actions in systematically poisoning the land. No, it’s <em>someone else’s</em> fault: probably it’s those Europeans, in fact. <a href="#2020-12-11-the-boris-maximizer-footnote-7-return">↩</a></p></li>
<li id="2020-12-11-the-boris-maximizer-footnote-8-definition" class="footnote-definition">
<p>There are also a very small number of people — Douglas Carswell for instance — who believe in brexit for reasons which are not simple bigotry: they’re wrong, but they’re not bigots. But these people are in a small minority of brexiteers, of whom the great majority are, like it or not, bigots. <a href="#2020-12-11-the-boris-maximizer-footnote-8-return">↩</a></p></li>
<li id="2020-12-11-the-boris-maximizer-footnote-9-definition" class="footnote-definition">
<p>To repurpose an old joke about guitar players: what’s the difference between a brexiteer and a terrorist? You can negotiate with a terrorist. <a href="#2020-12-11-the-boris-maximizer-footnote-9-return">↩</a></p></li></ol></div>Grammar nazis and actual nazisurn:https-www-tfeb-org:-fragments-2020-11-17-grammar-nazis-and-actual-nazis2020-11-17T13:18:52Z2020-11-17T13:18:52ZTim Bradshaw
<p>In the old, more innocent days of the internet — the days when people naïvely assumed that nazis were figures from a dark past, fading slowly into history — people who were very pedantic about prescriptive rules of grammar were often called ‘grammar nazis’. We did not think, then, that grammar nazis might be unconsciously encouraging actual nazis. But I think they were, and are.</p>
<!-- more-->
<p>Recently, Jonathan Bouquet wrote <a href="https://www.theguardian.com/theobserver/commentisfree/2020/nov/15/may-i-have-a-word-about-the-language-of-epidemiologists">this</a> article in <em>The Observer</em>, from which I quote:</p>
<blockquote>
<p>Epidemiologist after epidemiologist warns that we must modify our “behaviours” if we are to counter the pandemic. Quite when it became obligatory for this horde of “experts” to pluralise the word is not known, but I do wish they would desist. And given their track record during coronavirus, with certain honourable exceptions, how many would dare to admit their profession if they were to be asked at a party what they did for a living? They’d be far better off saying they were an actuary.</p></blockquote>
<p>There are many things wrong with this paragraph.</p>
<p>First of all, while Mr Bouquet is clearly at least playing at being unaware of it, natural languages do change over time. In some socially-privileged dialects of English a century ago, ‘behaviour’ was a mass noun and thus ’*behaviours’ was not correct in those dialects. But groups of people who deal with the behaviour of humans, or animals, or computer systems, found they needed a count noun: they needed a term, for instance, to talk about ‘washing your hands’ and ‘wearing a mask’ and whether people are doing one or both of these things. For a while, perhaps, clumsy terms like ‘behaviour patterns’ were used, but then the humans who actually get to define the language they speak — not Mr Bouquet, who merely gets to snipe at them because they are not using the language he learned at school any more — change the language, and ‘behaviour’ became usable as a count noun in the dialect used by these groups: ‘washing your hands’ is now a behaviour and ‘washing your hands and wearing a mask’ are<sup><a href="#2020-11-17-grammar-nazis-and-actual-nazis-footnote-1-definition" name="2020-11-17-grammar-nazis-and-actual-nazis-footnote-1-return">1</a></sup> two behaviours.</p>
<p>The inability to understand that the language can change and has changed — or the pretence of not understanding that for rhetorical purposes — merely makes Mr Bouquet look rather silly. Why is he also encouraging bigotry? Well, let’s look at an excerpt from the paragraph quoted above:</p>
<blockquote>
<p>Quite when it became obligatory for this horde of “experts” to pluralise the word is not known, but I do wish they would desist.</p></blockquote>
<p>Look at the scare quotes and the term ‘horde’<sup><a href="#2020-11-17-grammar-nazis-and-actual-nazis-footnote-2-definition" name="2020-11-17-grammar-nazis-and-actual-nazis-footnote-2-return">2</a></sup>: Mr Bouquet clearly does not think (or is pretending not to think) that epidemiologists are actually experts: they’re just people who pretend to be experts. A bit like Mr Bouquet, in fact, who is pretending to be an expert on English, but is not, any more than I am. But in fact they <em>are</em> experts: they are people (I am not an epidemiologist) who have spent a great deal of time studying a quite difficult subject and have developed a great deal of skill in and knowledge of that subject. There’s a term for such people, and that term is ‘expert’.</p>
<p>I suspect that Mr Bouquet is just sniping at them because he has a deadline to meet and sniping makes for a nice clever-sounding article which meets his word-count requirements: he knows they are experts (perhaps he even knows that ‘behaviour’ is a fine count noun), but he just enjoys the sniping. It’s not as if, after all, we live in a world where people being dismissive of experts and not listening to what they say is a problem, is it? No-one has ever said that a group of experts — for instance climate scientists — are not really experts, and even if they did, well that would not at all be a problem, would it? I’m sure Mr Bouquet could also have a field day with the terminology climate scientists use — what on earth is ‘a forcing’, I mean, how silly.</p>
<p>So, well, perhaps Mr Bouquet should not be writing articles sniping at experts, especially the experts who are trying to keep us all alive. Perhaps he should not be doing his bit to help corrode the idea of objective truth: perhaps he has not noticed that <a href="https://www.rand.org/research/projects/truth-decay.html">truth decay</a> is a quite serious problem, but it is. Even setting aside the part he’s playing in reducing the chances of avoiding the collapse of civilisation that global warming will cause, that’s a deeply offensive and stupid thing to do.</p>
<p>So what Mr Bouquet is doing is not only stupid, it’s offensive: it’s still not encouraging bigotry, is it? Yes, it is.</p>
<p><a href="https://en.m.wikipedia.org/wiki/The_Two_Cultures"><em>The Two Cultures</em></a> was published more than sixty years ago, and yet here is Mr Bouquet casually sniping at scientists as a group. That is, if not directly bigoted, certainly encouraging bigotry towards scientists. Unfortunately that’s only a tiny part of the bigotry which he is encouraging. To see what else he is encouraging take a look at the people that prescriptive grammar pedants — like Mr Bouquet — who snipe at those who ‘don’t use language properly’ tend to do their sniping at. In other words, look at the groups who don’t use the privileged dialect of English that Mr Bouquet probably thinks of as ‘proper English’ but which is in fact just one dialect of the language family.</p>
<p>Those groups include, for instance, black people, gypsies, Scottish people, Northern English people and people who did not go to the right schools. And the various variants of English they use are derided by the one-true-English brigade as ‘degraded’ or ‘simplified’ or less expressive than the one-true-English, because by implication the people in those groups are less clever than the people who speak the one-true-English. They are not less clever and the dialects they speak are not less expressive: saying they are is bigotry, and Mr Bouquet is, very definitely, encouraging that bigotry.</p>
<p>I’m very sure Mr Bouquet is impeccably liberal and would not think for a moment that what he is doing is encouraging bigotry, still less that he is actually being bigoted towards scientists. Perhaps he should stop and think about that for a bit. Perhaps <em>The Observer</em> should stop and think about that for a bit.</p>
<p>He should also learn some linguistics. There are books on it.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2020-11-17-grammar-nazis-and-actual-nazis-footnote-1-definition" class="footnote-definition">
<p>Grammar nazis might want to argue about whether this should be ‘is’ or ‘are’. I don’t know and, frankly, don’t care that much: the rules of grammar are the rules we make. <a href="#2020-11-17-grammar-nazis-and-actual-nazis-footnote-1-return">↩</a></p></li>
<li id="2020-11-17-grammar-nazis-and-actual-nazis-footnote-2-definition" class="footnote-definition">
<p>Also: ‘pluralise’? Seriously. <a href="#2020-11-17-grammar-nazis-and-actual-nazis-footnote-2-return">↩</a></p></li></ol></div>MIME as a disease vectorurn:https-www-tfeb-org:-fragments-2020-08-27-mime-as-a-disease-vector2020-08-27T10:34:35Z2020-08-27T10:34:35ZTim Bradshaw
<p><a href="https://en.wikipedia.org/wiki/MIME">MIME</a>, the Multipurpose Internet Mail Extensions, seems like a good idea: what’s not to like about being able to send arbitrary data by email? In 1996, when I wrote the below, I didn’t think it was.</p>
<!-- more-->
<p>Let’s say there are two computer system vendors:</p>
<ul>
<li><strong>Vendor 1</strong> provides a proprietary OS of high quality and high price, with good quality support. Is committed to ‘open systems’ and publishes specifications of its interchange formats for mail, files and so forth. Its interchange formats may be ‘high value’ — text with logical rather than visual markup.</li>
<li><strong>Vendor 2</strong> provides a proprietary OS of lower quality but much lower price, with essentially no support. Is completely uninterested in open systems: does not publish its interchange formats, changes them frequently and incompatibly. Its interchange formats may also be ‘low value’ — for instance text with visual rather than logical markup.</li></ul>
<p>Obviously one should buy systems from vendor 1: since purchase and vendor-support cost is rather small compared to the costs caused by low-quality systems this is clearly the right thing to do.</p>
<p>Wrong. Circumstances can easily arise where buying from vendor 2 is the only viable option. This will increase greatly the cost of computing over what it ‘should be’, and will probably ensure that computing systems are of marginal benefit, if any. Even so it is necessary to buy these inferior systems.</p>
<p>How does this happen? The key is data interchange. If the systems of vendor 2 become popular — they are cheap, after all, and they will run on cheap hardware, so they are quite seductive to people who are not costing their systems thoroughly, as well as for home use — and if people who have these systems once start interchanging data — say mail messages using MIME — with the owners of vendor 1 systems, then vendor 1 is doomed.</p>
<p>Vendor 2 system owners will soon start getting mail in formats supported by vendor 1. But these are open standards: vendor 2 can implement displayers and editors for these formats. In fact it’s likely that free versions of these things will become available. Owners of vendor 2 equipment are happy.</p>
<p>Vendor 1 system owners will start getting mail in formats supported by vendor 2. These are closed, rapidly changing, formats. Vendor 1 has a problem: it has to reverse-engineer the format as it is closed, and as soon as it has done that, vendor 2 changes the format. Even if it can reverse-engineer the formats, the upward conversion from visual markup to logical markup is a hard problem which does not have a general solution.</p>
<p>If vendor 2 systems are common, then it becomes commercially important to owners of vendor 1 equipment to be able to deal with vendor 2’s formats. But vendor 1 cannot keep up with the vendor 2 formats.</p>
<p>The solution is to give up and buy from vendor 2 rather than vendor 1, and use vendor 2’s interchange standards. This will allow you to survive, since you can interchange data with other vendor 2 owners, but will mean that your computing systems are marginally useful, if at all:</p>
<ul>
<li>data is kept in low-value formats so you cannot reuse it;</li>
<li>formats change so old data cannot be used even in vendor 2 systems;</li>
<li>support costs go up as the lower-quality systems provided by vendor 2 break more often, and the poor or nonexistent support from vendor 2 forces local support at great cost.</li></ul>
<p>Of course, vendor 2 needs to be able to force its data formats on people who have vendor 1 systems. This is now easy: computer networks and email are so prevalent that almost anyone has to be able to do interchange with almost anyone else. In particular MIME opens the door: if I’m on a vendor 1 machine, and vendor 1 has implemented MIME in its MUA (after all, vendor 1 is committed to open standards), then I will shortly find vendor 2 documents arriving in my mailbox, and shortly after that I will find myself buying a vendor 2 system.</p>
<p>It’s all a catastrophe.</p>
<hr />
<p>I wrote this in early August 1996: the text above has been converted to markdown from its original HTML but is otherwise essentially unchanged from then. ‘Vendor 1’ was Sun, and ‘vendor 2’ was, of course, Microsoft, with the low-value interchange format being Word.</p>
<p>I don’t think I was completely right, but I was at least partly so: a lot of really terrible, very low-value data formats have become very prevalent, at least in part because MIME allows them to be easily transmitted.</p>
<p>One thing I didn’t see coming (or saw coming but had not yet accepted) is that the disease spread by MIME would spread even to systems provided at very low or zero up-front cost, such as Linux: if you use OpenOffice or a derivative, you have been infected by the disease.</p>
<p>Another thing that was not obvious was that some of the low-value formats would become effectively standardised, and so would be less toxic. <a href="https://en.wikipedia.org/wiki/Rich_Text_Format">Rich Text Format</a> is perhaps one good example, but even Word’s own native format may now be effectively a standard. This means that writing in these formats, while still very seriously limiting the value of your data, does not lock you in to a vendor as much as it once did.</p>
<p>It is still, however, a catastrophe.</p>Do not use Duplicacy on macOSurn:https-www-tfeb-org:-fragments-2020-08-22-do-not-use-duplicacy-on-macs2020-08-22T10:17:02Z2020-08-22T10:17:02ZTim Bradshaw
<p>Duplicacy is a backup tool. It may possibly have good uses, but if you are using it on a Mac it is probably not actually making backups.</p>
<!-- more-->
<h2 id="the-architecture-of-the-application">The architecture of the application</h2>
<p>The Duplicacy application<sup><a href="#2020-08-22-do-not-use-duplicacy-on-macs-footnote-1-definition" name="2020-08-22-do-not-use-duplicacy-on-macs-footnote-1-return">1</a></sup> on the Mac presents itself as a little web server which you can then talk to (only via <code>localhost</code>, which is good) to configure, run and monitor backups.</p>
<p>What it does behind the scenes is more complicated. Other than some keychain entries (perhaps only one keychain entry) for a master password which is used to encrypt all the other sensitive data, all of its state lives in <code>~/.duplicacy-web</code>. This includes all the configuration, logs and so on and, critically, an executable which is the actual program which runs backups, which lives in <code>~/.duplicacy-web/bin</code> and has a name like <code>duplicacy_osx_x64_2.6.1</code>. The application simply invokes this program to run backups for it. The application will also update this executable when it notices a new one.</p>
<p>This itself is mildly terrifying: where did this executable come from? How safe is it? Can you be sure that the place it comes from will never be compromised? This executable is about to read all your files and copy them somewhere: you probably want to be a bit more sure about it than this.</p>
<p>(This is very different from the case of updating the application itself: this is, or should be, something done under human control. At least in principle you can, and should, check that the thing you have just downloaded actually is what it says it is, and if you don’t, well, that’s a risk you are consciously taking.)</p>
<p>It gets worse: the default configuration of the application will fetch the <em>latest</em> executable, not a <em>stable</em> one (however that is defined), thus maximising the chance that you will be running something that doesn’t work to do your backups, and also maximising the chance that you’ll get a compromised executable. If you are not frightened by now, you will be in a minute.</p>
<h2 id="the-annoyances-of-macos">The annoyances of macOS</h2>
<p>From, I think, 10.14, macOS has developed a complicated and annoying protection system which is completely orthogonal to file permissions. I do not understand this system at all, but it essentially involves various policies about what programs can read and write to what. The intention seems to be that, for instance, some application you install should not be able to read or write personally-sensitive data without your explicit permission, <em>even if the filesystem or other permissions would allow it to do so</em>.</p>
<p>‘Personally-sensitive data’ includes things like your email, your contacts, location information and so on. You can see these permissions in the ‘Privacy’ pane of the ‘Security & Privacy’ entry in ‘System Preferences’ and presumably there is some configuration file somewhere which backs all this, and the <code>tccutil</code> command can be useful as well. The protection system also controls various APIs, such as the one that provides location information.</p>
<p>Although this system is irritating in the usual Apple way, I think it’s well-motivated: my email contains personally-sensitive data about me if no-one else, and I definitely don’t want some random program I run snooping on it, or finding out where I am, without explicitly asking me first.</p>
<p>A place where this protection system really gets in the way is for backup tools. Backup tools <em>really need</em> to be able to, well, make backups, and the most important things they need to back up are often the most sensitive. I <em>really want</em> my backup program to be able to back up my email, for instance, as well as my calendar configuration and so on, and all the other stuff that the macOS protection mechanism would not normally let it read.</p>
<p>So, Apple have thought of this. If you trust some application you can grant it ‘full disk access’ which lets it read (and write, probably) the whole filesystem, only limited by filesystem permissions. This is exactly what you need for a backup program.</p>
<h2 id="the-first-disaster">The first disaster</h2>
<p>So, obviously, when you get Duplicacy, you anoint it suitably in the Privacy pane so that it can have full disk access. (It does not tell you to do this, which is a bad sign in itself.)</p>
<p>This doesn’t work. I think it doesn’t work because the program that is doing the backups is not the Duplicacy application, but this little executable which it downloaded. And, in fact, that’s a <em>good</em> thing: I would really rather not allow an application to secretly download some executable which can read (and write) all my files and send them who-knows-where. It may be that the reason it does not work is that the executable is not signed, although it does appear to be signed, so I am not sure.</p>
<p>In any case, what happens is that the executable fails to read sensitive data and thus fails to back it up. And it dutifully logs this, in <code>~/.duplicacy-web/logs/backup-*.log</code>:</p>
<pre><code>2020-08-21 15:27:40.769 WARN LIST_FAILURE Failed to list subdirectory: open /Users/tfb/Library/Application Support/com.apple.TCC: operation not permitted
2020-08-21 15:27:40.955 WARN LIST_FAILURE Failed to list subdirectory: open /Users/tfb/Library/Calendars: operation not permitted
[...]
2020-08-21 15:27:43.830 WARN LIST_FAILURE Failed to list subdirectory: open /Users/tfb/Library/Containers/com.apple.mail: operation not permitted
[...]
2020-08-21 16:26:53.142 WARN BACKUP_SKIPPED 23 directories and 20 files were not included due to access errors</code></pre>
<p>In other words: the backup worked, partially, but it didn’t succeed in reading some of the most critical data. If you need to restore from this backup, all your email will be gone.</p>
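<p>The failures are at least logged, so they can be detected mechanically rather than by hoping the application tells you. Here is a minimal sketch in Python which does that: the only things it assumes — the log directory and the two warning tokens — appear in the log excerpt above, and may of course differ between versions of Duplicacy.</p>
<pre><code>#!/usr/bin/env python3
"""Scan Duplicacy web-edition logs for silently-skipped files.

A sketch only: it assumes the log location and warning tokens
shown above, which may vary between versions.
"""

import sys
from pathlib import Path

LOG_DIR = Path.home() / ".duplicacy-web" / "logs"
BAD_TOKENS = ("LIST_FAILURE", "BACKUP_SKIPPED")

def failures(log):
    # Yield each line recording something the backup failed to read.
    with log.open(errors="replace") as f:
        for line in f:
            if any(token in line for token in BAD_TOKENS):
                yield line.rstrip()

def main():
    bad = 0
    for log in sorted(LOG_DIR.glob("backup-*.log")):
        for line in failures(log):
            bad += 1
            print(f"{log.name}: {line}")
    return 1 if bad else 0      # non-zero exit if anything was skipped

if __name__ == "__main__":
    sys.exit(main())</code></pre>
<p>Run from <code>cron</code> or a <code>launchd</code> job this will at least tell you that your backups are incomplete.</p>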
<p>Well, perhaps you could suitably anoint the downloaded executable? You could do that, if you could work out how to get the Finder to let you see directories whose names have leading <code>.</code>s, which is possible but fiddly. And it would work, for a while, until a new version with a new name appears, and then it will all break again and you’ll have to do it all again.</p>
<p>So that’s a disaster. But it’s not the most serious one.</p>
<h2 id="the-second-disaster">The second disaster</h2>
<p>So, you are configuring this thing via the web interface, like a good person. And you’ve thought to anoint the application so it can read everything, even though at no point did it tell you to do this (unlike other, competently-written, backup tools). And you run backups, and the executable dutifully logs that they failed. <strong>And there is no indication of this, at all in the web interface</strong>, which simply tells you that the backup completed, by which it apparently means ‘the program ran, and after a while it stopped running, and that means everything must be OK’.</p>
<p>In other words: if you are using a recent macOS, then Duplicacy is almost certainly not making good backups for you, and it is certainly not telling you about it when it does not.</p>
<h2 id="dont-use-duplicacy">Don’t use Duplicacy</h2>
<p>I don’t understand how this happened other than that, very clearly, a lot of testing simply was never done. I do understand that it tells you something very, very bad about Duplicacy. I certainly would not, ever, use it on a Mac, and I find it so alarming that I would not in fact use it on any system at all.</p>
<p>Backup tools need to work, because when you need them you <em>really</em> need them. Duplicacy is <em>backup theatre</em>: something that looks like a backup tool but in fact is not.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2020-08-22-do-not-use-duplicacy-on-macs-footnote-1-definition" class="footnote-definition">
<p>This refers to ‘Duplicacy Web Edition’ — there was an older GUI application which I don’t know anything about. <a href="#2020-08-22-do-not-use-duplicacy-on-macs-footnote-1-return">↩</a></p></li></ol></div>Bigger better more expensiveurn:https-www-tfeb-org:-fragments-2020-08-09-bigger-better-more-expensive2020-08-09T12:39:15Z2020-08-09T12:39:15ZTim Bradshaw
<p>A comment to <a href="https://backreaction.blogspot.com/2020/08/really-big-experiments-that-physicists.html">this excellent post by Sabine Hossenfelder</a><sup><a href="#2020-08-09-bigger-better-more-expensive-footnote-1-definition" name="2020-08-09-bigger-better-more-expensive-footnote-1-return">1</a></sup></p>
<p>I think that it’s easy to deride the ‘let’s just do what we’ve already done, but an order of magnitude bigger/better/more expensive’ (BBME) approach. But really it’s more subtle.</p>
<!-- more-->
<p>In particular I think there clearly are times and areas where BBME works really well for quite a long time. I don’t know the history of particle physics experiments well enough, but I bet that between, say, 1945 and whenever LEP was commissioned BBME worked brilliantly: build a BBME synchrotron and huge amounts of new physics poured out and the price was not extortionate. The same is true for gravitational wave detectors — when I was in the process of stupidly abandoning my PhD in GR in the 1980s people pretty much knew that although there was no chance of any detector we could build then seeing (hearing, really) anything, it kind of was the case that some future detector should be able to. And the only real way to build that future detector was to work up through a sequence of, really, technology development systems which would not be adequately sensitive but would let people do the engineering. And we ended up with LIGO / VIRGO & their friends. And you are not going to persuade me that direct detection of gravitational waves was not a prize worth having.</p>
<p>But I think there comes a point where BBME stops working for at least three reasons: you can reach a point where there’s a big hole in the physics so that to see anything really new you need something absurdly bigger so your BBME system just ends up doing the same thing but a bit more accurately; the ME becomes too expensive — when your proposed experiment will use the entire GDP of the planet it is probably too expensive; and finally the spin-off benefits become not interesting enough.</p>
<p>Maybe particle physics is at that point (I suspect strongly it is but I am not a particle physicist). Is gravitational wave astronomy there yet? I don’t think so, but it will get there one day. Are space telescopes, say, there yet? Well it’s tempting to say yes given what JWST has cost, except that JWST is not a BBME Hubble: it’s a different thing altogether, so it doesn’t provide evidence either way. I suspect absurdly large ground-based optical telescopes may be close to or at that point though.</p>
<p>I think humans are also really bad at seeing when BBME has run its course. Hi-Fi is a good example. For a long time BBME worked really well for Hi-Fi — if you’ve listened to Hi-Fi made in the 1950s it is immediately apparent that it is not as good as Hi-Fi made in the 1970s. But then it all hit a wall, in this case because many components became essentially perfect (they got better than the human auditory system). But BBME carried on leading to all the absurdities of speaker cables with all the spins lined up or something we see today.</p>
<p>And of course coming to the end of BBME is awkward for another reason. BBME is typically an exponential process (make one twice as good and four times as expensive every ten years) and exponential processes have nice properties: they are self-similar so that everything is the same, just scaled up, from one year to the next, and humans don’t like change. There are always enough jobs that each tranche of good students gets employed, for instance. When BBME stops, then suddenly everything is different, and a huge wave of students don’t get employed (and since the exponential process has run for many years, it is a <em>huge</em> wave). Suddenly lots of people who thought they had careers now don’t and lots of courses which turn out those people stop getting students signing up to them. This is a tiny version of the much bigger problem of the end of various other exponential processes which humans have relied upon since there were humans and on which our entire economic system is built. We are completely failing to understand how to negotiate the end of any of these processes without very awkward, and very likely civilisation-ending consequences. Maybe there <em>is</em> no way to end them without these very awkward consequences: just because we need there to be a way to do that does not mean there is. For physics at least the awkward consequences are pretty small unless you are a particle physics PhD student who was expecting a career.</p>
<p>It probably also matters that coming off the end of BBME means that people need to start actually being clever again: brute force and money has failed so cunning and cheap is needed. That’s a big change in thinking. Although I’ve not been able to find details of the total cost, I suspect that the EHT is an example of cunning and cheap: you want to make a telescope as big as the Earth? well, it turns out you can cheat. Hi-Fi provides another example again: could you make, today, Hi-Fi that was actually better rather than just more gold-plated? Yes, it turns out you could, because loudspeakers have really significant levels of distortion (this makes the depleted-uranium–24-bit-speaker-cable-thing even more of a joke than it already is). Current speaker designs are never going to get much better: perhaps there’s a cheap and cunning alternative<sup><a href="#2020-08-09-bigger-better-more-expensive-footnote-2-definition" name="2020-08-09-bigger-better-more-expensive-footnote-2-return">2</a></sup>?</p>
<p>Finally I think it’s worth remembering the spin-off thing. Everyone else thinks that the LHC is doing research into particle physics: I think it’s doing research and development into production-quality large-scale high-power superconducting systems. And given my take that we’re going to need to ship huge amounts of electric power over many thousands of miles (half way around the world in fact) if we’re going to have a medium-term future as a civilisation I’m kind of interested in those. Would a BBME collider further that goal? I don’t know.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2020-08-09-bigger-better-more-expensive-footnote-1-definition" class="footnote-definition">
<p>Believe it or not, this only exists because I could not work out which browser-anti-toxin I needed to turn off to get the thing to understand that I have a Google account, I was signed in to it, and so on. So far we have come. <a href="#2020-08-09-bigger-better-more-expensive-footnote-1-return">↩</a></p></li>
<li id="2020-08-09-bigger-better-more-expensive-footnote-2-definition" class="footnote-definition">
<p>Hint: probably there is. <a href="#2020-08-09-bigger-better-more-expensive-footnote-2-return">↩</a></p></li></ol></div>Golden earsurn:https-www-tfeb-org:-fragments-2020-07-29-golden-ears2020-07-29T12:13:16Z2020-07-29T12:13:16ZTim Bradshaw
<p>Or: Hi-Fi and the death of truth.</p>
<!-- more-->
<p>High fidelity audio is now dominated by people who think they have ‘golden ears’ and are able to hear differences between audio components which it would be physically impossible to hear and which almost certainly do not exist. Unsurprisingly these people refuse to consider any experiment which could reliably reveal whether they can, in fact, detect any difference. Often the same group of people will make absurd claims for systems which have very audible <em>lack</em> of fidelity — distortion — such as records and valve amplifiers, which shows how far removed they have become from reality<sup><a href="#2020-07-29-golden-ears-footnote-1-definition" name="2020-07-29-golden-ears-footnote-1-return">1</a></sup>.</p>
<p>There are plenty of areas where subjective opinion is the only useful thing: my opinions about some of the guitars I own are extremely subjective, as are my opinions about various movies, bands, books and a huge number of other things<sup><a href="#2020-07-29-golden-ears-footnote-2-definition" name="2020-07-29-golden-ears-footnote-2-return">2</a></sup>. But subjective opinions about things which are objectively measurable either agree with the measurements or they are wrong: if you try to live on the Moon without oxygen you will die within a few minutes, and believing you won’t will not keep you alive: there are no alternative facts, there are only errors and lies.</p>
<p>Not all objective facts can yet be measured: for instance until 2015, while we believed gravitational waves to be a real phenomenon, we only had indirect evidence for them<sup><a href="#2020-07-29-golden-ears-footnote-3-definition" name="2020-07-29-golden-ears-footnote-3-return">3</a></sup>. Perhaps the differences that the golden eared claim to detect between Hi-Fi components are real, but not yet measurable other than by their golden ears. If that were the case there would still be a good way of detecting whether they are real: carefully controlled, sufficiently blinded<sup><a href="#2020-07-29-golden-ears-footnote-4-definition" name="2020-07-29-golden-ears-footnote-4-return">4</a></sup> comparison experiments. If someone claims they can detect a difference between two things then you do a careful experiment which will reveal whether they can, while removing any possible bias due to the subject, the experimenter or anyone else involved. If it turns out that they can, then you know there is something to be measured and you can try and work out what it is. Even if you can’t measure it you know there is <em>some</em> objective truth there.</p>
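<p>For audio, one standard form of such an experiment is an ABX test: on each trial the subject hears component A, component B, and then X, which is randomly one of the two, and has to say which one X was. If they cannot really hear a difference their answers are just guesses, and how well guessing can do follows the binomial distribution. The analysis is elementary — here is a sketch in Python, with trial counts that are made up purely for illustration:</p>
<pre><code>from math import comb

def p_value(successes, trials):
    # One-sided tail of Binomial(trials, 1/2): the probability of
    # doing at least this well by guessing alone.
    return sum(comb(trials, k)
               for k in range(successes, trials + 1)) / 2**trials

# For instance, 12 correct identifications in 16 trials:
print(p_value(12, 16))          # about 0.038</code></pre>
<p>The point is not the arithmetic, which is trivial, but that the claim is <em>decidable</em>: someone with golden ears will reliably beat guessing over enough trials, and someone without will not.</p>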
<p>The golden eared reject such experiments.</p>
<p>The golden eared delusion is not itself significant: who, really, cares if a bunch of rich cranks believe they have magic ears? But the antiscientific thinking which underlies it is very significant. Once someone fails to understand that the differences they ‘hear’ are in their own minds, and rejects experiments which could show this, then they have decided that they can believe whatever they want to believe about Hi-Fi: there is no objective truth. And if you are free to believe whatever you like about Hi-Fi, why should I not believe whatever I like about vaccination, climate change or how many Jewish people, Roma and others the Nazis killed? Once objective truth is dead, it’s dead: you don’t get to say it’s only dead in specific domains. The golden eared delusion is one small path to the decay of any notion of objective truth itself which we are now seeing.</p>
<p>It’s interesting to understand how the mechanisms of truth decay work: if we understand them perhaps they can be reversed. So it is worth trying to understand how and why this particular pathology arose. Fortunately this is pretty easy.</p>
<h2 id="ancient-history">Ancient history</h2>
<p>Initially there was a period during which there was rapid and very audible progress in music reproduction. This started before 1900, and came to an end sometime in the late 1980s, with some tail-off after that. I have a Quad 2 / 22 / FM2 in (now) better-than-new condition and even with a very good source it’s really noticeably worse than a good modern amplifier. Even after amplifiers got good, there was lots of progress in turntable, tonearm and cartridge design right through the 1970s. However even then some early signs of the later pathology appeared, such as S-shaped tonearms, which started because of confusion and for which spurious justifications were later invented because they looked so pretty.</p>
<p>Throughout this period you could get better Hi-Fi by getting newer Hi-Fi. Manufacturers loved this of course, and a certain kind of person also liked it. My friends and I spent an altogether inordinate amount of time obsessing about Hi-Fi in the late 1970s, and everyone knew someone whose father (inevitably father: Hi-Fi was then and almost certainly still is a male hobby) had some really expensive and beautiful system on which we could listen to the latest Yes album. Some people even probably thought it attracted girls: in my experience, if it did, it didn’t work nearly as well as guitars, motorbikes or just being a decent human being did.</p>
<h2 id="the-wall">The wall</h2>
<p>But then it all gradually hit the wall. Successively various components of systems got sufficiently good that, while they could still get better, they could no longer get <em>perceptibly</em> better. Loudspeakers were the only significant exception: see below for more about this.</p>
<p>During and after this event something related also happened: very portable audio reproduction systems arrived and became pervasive. Almost everyone who wanted to listen to music started using these systems rather than enormous Hi-Fi setups. Their quality was at first very poor, and any system based on in-ear headphones has limitations even now, but for people whose goal was to listen to music rather than to play with toys, they were more than adequate.</p>
<p>These two changes meant that large numbers of people who would formerly have bought Hi-Fi systems to listen to music bought portable systems instead, and people who still wanted to listen to music on a traditional Hi-Fi, and had formerly upgraded their systems regularly to improve the quality, largely stopped doing so as there was no longer any reason, in terms of quality of sound reproduction, to replace components. This was very bad news for the Hi-Fi industry.</p>
<p>Well, the inevitable happened: many Hi-Fi companies went out of business. I have no idea how many Hi-Fi companies there now are compared to 1980 or how their financial value compares, but fewer, and less.</p>
<p>Some companies survived by simply servicing the relatively small remaining market as new generations of customers appeared: not everyone wants to use their parents’ or, more likely, grandparents’ Hi-Fi and in any case electronic components don’t last for ever.</p>
<p>Some companies realised that, for many people who are interested in Hi-Fi only part of their interest is in achieving the best possible sound reproduction: a very significant reason they like Hi-Fi is because it can be beautiful. It is undeniable that a well-engineered turntable or valve amplifier is a very lovely machine in much the same way that a vintage sports car or a 1950s jukebox is. There is absolutely nothing wrong with the desire to own beautiful things and very well-engineered things are often beautiful. A reason which is, I think, rather less good than the desire to own beautiful things is the desire to own status symbols: very expensive objects which will be recognised as such by other people. So some companies started to produce Hi-Fi which was explicitly designed to be appealing in this way.</p>
<p>Some companies — very often the same ones who were producing very beautiful Hi-Fi — made a business of being willing to maintain their own products for a very long time: when you bought something from them you knew it could be repaired almost indefinitely and you were therefore willing to pay a high price for it<sup><a href="#2020-07-29-golden-ears-footnote-5-definition" name="2020-07-29-golden-ears-footnote-5-return">5</a></sup>.</p>
<h2 id="a-way-under-the-wall">A way under the wall</h2>
<p>Other, less honest, companies realised they could exploit four secrets that had been lurking, usually slightly below the surface, in the minds of their customers since the beginning.</p>
<p><strong>The first secret</strong> is that quite a lot of people suffer from what is called ‘gear acquisition syndrome’ — GAS for short. GAS was first described among musicians and it involves thinking that ‘if only I had a better (guitar, amp, effects pedal, …) I would be able to play much better and would become the hugely successful rock star I know I could be.’</p>
<p>In other words, GAS makes you think that what’s stopping you being a great guitarist is not lack of talent or unwillingness to practice, but <em>lack of the right gear</em>. GAS is a form of <em>displacement activity</em> where instead of dealing with the real problem — that you’re not very good, that you don’t practice — you spend endless hours obsessing over what gear to buy and in fact buying gear. And GAS doesn’t stop: once you have the expensive guitar and you still can’t play like Jimmy Page, well, it must be because the modern ones aren’t up to much — you need a 50s or 60s one. And the tape echo simulator you have isn’t good enough — nothing but a real tape echo (valve, not solid state mind you, the solid state ones were never up to much<sup><a href="#2020-07-29-golden-ears-footnote-6-definition" name="2020-07-29-golden-ears-footnote-6-return">6</a></sup>) will do. And on it goes, endlessly, eating money and consuming the time you should spend practicing in looking at adverts & reading reviews.</p>
<p>GAS applies to Hi-Fi as well: rather than just <em>listening to music</em> people start obsessing that it would all sound much better if only they had better Hi-Fi. And it doesn’t matter whether it actually would, or even if it <em>could</em> sound any better: GAS is still driving you to buy more, ‘better’ Hi-Fi<sup><a href="#2020-07-29-golden-ears-footnote-7-definition" name="2020-07-29-golden-ears-footnote-7-return">7</a></sup>, spending time on that which would be better spent listening to more music.</p>
<p><strong>The second secret</strong> is that people like to think that they are special. Everyone likes to think that they are somehow gifted: one of the hard things that happens to almost everyone as they grow up is realising that, in fact, they are pretty much the same as everyone else. Some very few people really are gifted: Mozart was gifted, Einstein was gifted, Picasso was gifted, Jimi Hendrix was gifted. But most of us have more-or-less the same gifts as everyone else. And this is obvious really: if everyone is gifted, or most people are gifted, then, well, those gifts are just ordinary.</p>
<p>Most people learn this truth eventually, but no one enjoys learning it and some people don’t learn it at all. In some cases this failure to learn is very toxic: Donald Trump and Dominic Cummings are current examples. Even people who understand that they are not, in fact, special are susceptible to suggestions that they are. This is why rock stars, dot.com billionaires and film stars are such horrible people: they spend their lives surrounded by sycophants who are endlessly telling them how special and important they are. Almost everyone will eventually break under the pressure of such flattery: even if they didn’t start by believing they were, well, a bit special, they will end up doing so after enough people have said they are.</p>
<p>So if you tell someone, often enough, that they have the special gift of golden ears then, unless they have a very deep-rooted understanding of why they don’t, in fact, have golden ears — why golden ears can’t exist — some of them will start to believe they have. Because they’re special, and they have special ears: of course they do.</p>
<p><strong>The third secret</strong> is that human sensory perception is both unreliable and subject to bias. There are very many examples of this: many of the most famous ones being <a href="https://en.wikipedia.org/wiki/Optical_illusion" title="Optical illusion">optical illusions</a> of various kinds. A good example is the <a href="https://en.wikipedia.org/wiki/Checker_shadow_illusion" title="Chequer shadow illusion">chequer shadow illusion</a>, in which two areas which are in fact identical shades of grey appear to be different shades. Another good recent example is the famous viral <a href="https://en.wikipedia.org/wiki/The_dress" title="The dress">dress</a> phenomenon from 2015, in which different people perceive the same dress as being coloured either black and blue or white and gold. There are many, many others: human visual perception is clearly simply not reliable.</p>
<p>But all these examples are <em>optical</em> illusions: perhaps hearing is special and is reliable and not subject to bias? This would be extremely surprising, but without evidence it can’t be ruled out as a possibility. Well, there is lots of evidence. A famous example is the <a href="https://en.wikipedia.org/wiki/McGurk_effect" title="McGurk effect">McGurk effect</a>: this is something that will be familiar to anyone who has watched dubbed films, or films in which the audio is not completely in sync. What happens is that, if you are watching someone speak and they produce one phoneme while the sound corresponds to a different phoneme, you can end up hearing a phoneme which is neither the one that they really said nor the one that the sound corresponded to but some third phoneme. If someone says /ga-ga/ but the sound of /ba-ba/ is played over it you can end up hearing /da-da/, for instance. <a href="https://en.wikipedia.org/wiki/Shepard_tone" title="Shepard tones">Shepard tones</a>, which seem to rise endlessly in pitch, are another example<sup><a href="#2020-07-29-golden-ears-footnote-8-definition" name="2020-07-29-golden-ears-footnote-8-return">8</a></sup>. There are <a href="https://en.wikipedia.org/wiki/Auditory_illusion" title="auditory illusions">many more</a> examples of auditory illusions.</p>
<p>Perhaps the most well-known auditory illusion of all is <em>stereo</em>. When you listen to a correctly set-up stereo speaker system and a well-engineered recording, the impression that sounds come from instruments at well-defined points between, and often much further away than, the loudspeakers<sup><a href="#2020-07-29-golden-ears-footnote-9-definition" name="2020-07-29-golden-ears-footnote-9-return">9</a></sup> is compelling for most people. But no sounds are coming from those points: your senses are fooling you.</p>
<p>None of this should be surprising: our senses did not evolve to provide reliable, repeatable information free from bias: they evolved to make sure that we heard some terrible monster approaching from behind before it ate us. And if, occasionally, we hear monsters which are not there then that’s a lot better than being eaten. What would be surprising, in fact, would be if our sensory system <em>was</em> reliable.</p>
<p><strong>The fourth and greatest secret</strong> is that people want to believe in magic. I’m sure there are some people who want to believe only in the things that science can explain, but almost everyone wants to think that, somewhere, there are elves and dragons, water-spirits and wizards. In 2003, the BBC <a href="https://en.wikipedia.org/wiki/The_Big_Read" title="The Big Read">conducted a survey</a> to find the best-loved books in the UK. Of the top ten books, six involved magic of some kind; of the top five, four did. The best-loved book of all was <em>The Lord of the Rings</em>, which has also been found to be the best-loved book in Australia, Germany and the US, and has sold more than 150 million copies: about one copy for every 50 people now alive.</p>
<p>Almost no-one wants the world to be a place where there is no magic of some kind: even people who ‘don’t believe in magic’ want to believe in things like faster-than-light travel and time travel which are magic dressed-up as science<sup><a href="#2020-07-29-golden-ears-footnote-10-definition" name="2020-07-29-golden-ears-footnote-10-return">10</a></sup>. I want to believe in magic: wouldn’t the world be a better place if there <em>were</em> elves, river gods and goddesses and magical objects? Of course it would.</p>
<p>And this desire for there to be magic runs pretty deep. Perhaps we can’t have elves and genii locorum (or, perhaps, we can, somewhere just out of sight), but can’t we still have magical objects? I <em>know</em> that there are no magical objects, but still I own a beautiful valve compressor using a ‘new old stock’ military valve. Is it better than a really good digital compressor? Certainly it is not, but I want to believe it is. And I <em>know</em> that my beautiful ES–175 is just a machine made of wood, metal and bone and not even a particularly good example of one, and that it’s easily replaceable. But I would risk my life to rescue it in a fire, because some part of me believes that it is made of wood, metal, bone <em>and magic</em>.</p>
<p>The four secrets are:</p>
<ul>
<li>the human desire for an endless succession of better objects (GAS);</li>
<li>the desire of humans to believe that they are special and have special abilities not granted to other people;</li>
<li>the unreliability of human sensory perception and the ability to bias that perception;</li>
<li>the human desire to believe in magic, and particularly magical objects.</li></ul>
<p>Now it is easy to see how these can be exploited by unscrupulous Hi-Fi companies:</p>
<ul>
<li>offer an endless succession of ‘better’ Hi-Fi components, thus satisfying customers’ GAS;</li>
<li>persuade customers that they are special and have golden ears able to hear the differences in sound that ordinary, lesser, humans can not hear;</li>
<li>rely on the unreliability of human auditory perception together with poor or no experimental controls to do this;</li>
<li>provide components which purport to be magic in all but name — special speaker cables, special capacitors, turntable plinths made of lignum vitae, Hi-Fi components made long ago which are purported to have magic properties and so on.</li></ul>
<p>And, for some Hi-Fi companies this has worked very well indeed. The market is necessarily fairly small because magic objects don’t come cheap and, well, if the ordinary people had access to them the argument that the people they were selling to have golden ears would fail.</p>
<p>And, well, why does it matter? It’s obvious why it happens — companies need to stay in business, customers need to justify GAS, believe they are special and that magic exists, and lack of experimental discipline can be used to achieve this. And it’s obvious that the customers are gullible fools: but it’s their money, why should anyone else care?</p>
<h2 id="the-death-of-truth">The death of truth</h2>
<p>They should care. They should care because people are not compartmentalised. People who believe that they have special powers in one area tend to believe they have special powers in other areas. That makes them deeply unpleasant people — it’s hard enough dealing with the arrogance of people who really <em>are</em> gifted: dealing with the arrogance of people who only <em>think</em> they are gifted is a horrible experience.</p>
<p>But that is only a tiny part of the problem: people who don’t accept properly-controlled experiments in one area will be less likely to accept them in other areas; people who think that they can ignore scientific method in one area will tend to ignore it in other areas.</p>
<p>But you don’t get to pick and choose: in the areas where science works, <em>it works</em>, and if you say it does not work in an area where it applies what you are saying is that, well, you get to choose when to believe what it tells you or not based on what you want to be true. The end result of this is that people will start to think that they can just get to choose what they think is true as suits them: if it is convenient to them for something to be true, well then it is true, if it’s not convenient, then it is false, even if it’s the same thing that was true yesterday.</p>
<p>Once you open Pandora’s box, then you open Pandora’s box: what comes out is whatever is in it, but also <em>everything</em> that is in it. You don’t get to say you only want some of the contents.</p>
<p>And we have opened Pandora’s box: we live in a world of alternative facts and made-up truths, a world where the very notion of truth is in the process of being destroyed. Many of us are now ruled by people who think that truth is whatever is convenient to them this week, because many of <em>us</em> think that truth is whatever is convenient to us this week. Truth is dead.</p>
<p>But it’s not: there is, in fact, a world outside your head and that world does not care what you think is true or whether you think you have special golden ears. That world cares only about what <em>is</em> true. A virus does not care what you think: it cares only about what is true. The physics, chemistry and biology of the planet’s climate does not care what you think: it cares only about what is true. The virus will kill you whether or not you think it can, and the consequences of what we are doing to the climate will kill billions of humans no matter how hard they pretend it won’t. There is no magic, there is only truth, and the only way to discover that truth is carefully controlled experiment.</p>
<p>The fabric of myths and lies on which modern Hi-Fi is built is a small part of the death of truth, but it is a part of it. If you think you have golden ears while conveniently choosing to reject any experiment which could tell if you really have, if you believe in the magic properties of certain components, or if you are merely involved in selling these myths and lies to people who believe them, then you are partly responsible for this catastrophe. And I will not forgive you.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2020-07-29-golden-ears-footnote-1-definition" class="footnote-definition">
<p>I am personally very fond of the sound of records and of valve equipment, and have constructed a valve Hi-Fi amplifier as well as owning another. But I am fond of them because I <em>enjoy</em> the lack of fidelity they introduce. I do not pretend that either thing provides particularly accurate reproduction. <a href="#2020-07-29-golden-ears-footnote-1-return">↩</a></p></li>
<li id="2020-07-29-golden-ears-footnote-2-definition" class="footnote-definition">
<p>Somewhere, some reductionist AI person is saying ‘but these can be reduced to objective facts about the state of the brain’. Well, yes, but ‘reducing’ something to the state of a system which is so complex no human can understand it in any detail (this is obvious: if a human brain has enough state to store a complete copy of another human brain then, by recursing, it is clear that it must have an infinitely large state space) is not reducing it to anything objective in any useful sense. <a href="#2020-07-29-golden-ears-footnote-2-return">↩</a></p></li>
<li id="2020-07-29-golden-ears-footnote-3-definition" class="footnote-definition">
<p>The <a href="https://en.wikipedia.org/wiki/First_observation_of_gravitational_waves">first direct observation of gravitational waves</a> was made on the 14th September, 2015, although not published until 11th February 2016. <a href="#2020-07-29-golden-ears-footnote-3-return">↩</a></p></li>
<li id="2020-07-29-golden-ears-footnote-4-definition" class="footnote-definition">
<p>These experiments are normally called ‘double blind’ since neither the subject nor the experimenter knows enough to bias the result, consciously or not: they are both blinded. I prefer the term ‘sufficiently blinded’, which covers experiments where there may be more than two parties involved. What I mean by ‘sufficiently blinded’ is that no-one involved has information in advance which would allow them to bias the outcome. <a href="#2020-07-29-golden-ears-footnote-4-return">↩</a></p></li>
<li id="2020-07-29-golden-ears-footnote-5-definition" class="footnote-definition">
<p>For a long time Quad — formerly the Acoustical Manufacturing Company — made a business doing this: they may still. These two ideas — producing objects which are very beautiful, which serve, emphatically, as status symbols, and being willing to maintain them indefinitely — have famously been exploited in the photography field by Leica. <a href="#2020-07-29-golden-ears-footnote-5-return">↩</a></p></li>
<li id="2020-07-29-golden-ears-footnote-6-definition" class="footnote-definition">
<p>Related to GAS is the idea of some lost golden age in which everything sounded better — somehow, the sound achieved with a specific model of tape echo, or a particular studio compressor made in tiny numbers in the late 1960s, has never been equalled. It is not acceptable to consider that the sound achieved on records using the magic equipment might have been more due to the genius of the sound engineer than the equipment, because that would mean that <em>your</em> recordings sound bad because you are not very good, which can never be considered. <a href="#2020-07-29-golden-ears-footnote-6-return">↩</a></p></li>
<li id="2020-07-29-golden-ears-footnote-7-definition" class="footnote-definition">
<p>GAS is, in some ways, a malignant version of the ‘beautiful object’ motivation: the never-ending sequence of guitars, amplifiers and effects pedals that a musician with GAS buys are ever more exotic and beautiful, as is the never-ending sequence of Hi-Fi components that a person with GAS acquires. Of course, in both cases, the sufferer continues to fool themself that this is not what is motivating them. Again, Leica is a good parallel: if you are thinking of spending more than ten thousand pounds on a camera and lens and you think that it will make you a better photographer then you are a fool; it would be better to admit that you are doing it because you want the beautiful object or the status symbol. The sufferer from GAS either does not recognise this or truly believes that it is not the case. <a href="#2020-07-29-golden-ears-footnote-7-return">↩</a></p></li>
<li id="2020-07-29-golden-ears-footnote-8-definition" class="footnote-definition">
<p>This is related to a device sometimes used in electronic music called a ‘barberpole phaser’ which gives the impression of an endlessly upward or downward sweeping <a href="https://en.wikipedia.org/wiki/Phaser_(effect)" title="Phaser">phaser</a>. The same effect can also be achieved with flanging. <a href="#2020-07-29-golden-ears-footnote-8-return">↩</a></p></li>
<li id="2020-07-29-golden-ears-footnote-9-definition" class="footnote-definition">
<p>I find the stereo illusion works very much less well with headphones. But, well, it’s an illusion: perhaps other people hear the illusion much better than I do with headphones. <a href="#2020-07-29-golden-ears-footnote-9-return">↩</a></p></li>
<li id="2020-07-29-golden-ears-footnote-10-definition" class="footnote-definition">
<p>Almost certainly: FTL travel translates directly into causality violation, which is extremely bad news. <a href="#2020-07-29-golden-ears-footnote-10-return">↩</a></p></li></ol></div>The glorious work of Dominic Cummingsurn:https-www-tfeb-org:-fragments-2020-06-02-the-glorious-work-of-dominic-cummings2020-06-02T16:59:52Z2020-06-02T16:59:52ZTim Bradshaw
<p>Or: the Cummings-Johnson effect.</p>
<p>I thought it would be interesting to get an idea of how many people will die because Dominic Cummings thought it was fine to ignore the lockdown rules, and Boris Johnson agreed with him. So I wrote a program to explore this Cummings-Johnson effect.</p>
<!-- more-->
<h2 id="all-the-reasons-you-had-to-die">All the reasons you had to die</h2>
<blockquote>
<p><em>Jesus don’t want me for a sunbeam,</em>
<br /><em>because sunbeams are not made like me,</em>
<br /><em>and don’t expect me to cry,</em>
<br /><em>for all the reasons you had to die,</em>
<br /><em>don’t ever ask your love of me.</em></p></blockquote>
<p>There are two ways that what Cummings did in March 2020 will probably be killing people:</p>
<ul>
<li>he drove a long distance, presumably taking breaks<sup><a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-1-definition" name="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-1-return">1</a></sup>, while knowing he was infected with CV19;</li>
<li>now his actions are known, and now Johnson has supported them, other people’s behaviour will change.</li></ul>
<p>The first of these is likely to have killed people, and still be killing people, by spreading the virus: for instance to the toilets in service stations<sup><a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-2-definition" name="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-2-return">2</a></sup>. The second of these is likely to kill people, and perhaps has done so already, because now that it is general knowledge that Cummings & Johnson think that lockdown rules are for other people — for the little people, not people like them — those other people will take lockdown and social distancing less seriously, and people will die as a result of that.</p>
<p>It’s this second way that they are killing people that I looked at.</p>
<p><strong>The simulator described below is a toy</strong>: it’s very much a physicist’s ‘spherical cow’ model. It has no notion of locality for instance: infected individuals simply randomly pick other individuals to try to infect. The results it gives may be qualitatively reasonable, but if they are quantitatively correct this is coincidence. The purpose of writing it, and of the runs described here, was simply to see if the Cummings-Johnson effect is visible, and to get some kind of qualitative estimate of how large it might be: if their actions will probably kill only a few tens of people then they are doing no more harm than a common-or-garden mass murderer, while if their actions may kill thousands of people, then they’re working on a completely different scale.</p>
<p>Epidemic models which are far better than this exist. For instance the <a href="https://www.imperial.ac.uk/mrc-global-infectious-disease-analysis/">MRC Centre for Global Infectious Disease Analysis</a> — Professor Neil Ferguson’s group — must have one. I would be very surprised if these people haven’t run much better versions of the scenarios I describe below. But the results of these runs don’t seem to have been published. This is sad but, perhaps, not surprising given what we know about Cummings & Johnson and their attitudes to facts which disagree with their fantasy worlds.</p>
<p>Still: if there are results from better models I would very much like to know them.</p>
<h2 id="a-mindless-epidemic-simulator">A mindless epidemic simulator</h2>
<p>I wrote a very simple-minded simulator: it is unlikely to be realistic, it’s really a toy model. The results are unlikely to be quantitatively correct, but they may be qualitatively interesting. In the model individuals move through the usual sequence of states:</p>
<ul>
<li>initially they are uninfected & hence susceptible;</li>
<li>once they are infected they incubate the disease for \(t_l\) days, where \(t_l = 7\) in all the runs below;</li>
<li>they are then infectious for \(t_i = 14\) days;</li>
<li>on each of these days, they randomly pick another individual, and if that individual is susceptible they infect them with a probability which is initially \(p_i = 0.14\).</li>
<li>at the end of the period they either die, with probability \(p_d = 0.01\), or they survive but become non-susceptible.</li></ul>
<p>Additionally there may be a small ‘leakage’: every day, every susceptible person in the population stands a small chance of becoming infected. This models the infection leaking in from abroad, for instance. In all the runs here the leakage \(p_l = 10^{-8}\).</p>
<p>Finally the initial number of seeds can be set, the idea being to start the simulation after a good few people have become infected to avoid too much uncertainty in the trajectory of the epidemic. By default \(n_s = n_p/1000\), where \(n_s\) is the number of seeds and \(n_p\) is the population size.</p>
<p>All of the parameters are adjustable, as are how long to run for and what the stopping criteria are (with a leaky model things can keep on happening even after the number of infectious individuals reaches zero).</p>
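<p>Here is a minimal sketch of one simulated day, reconstructed from the description above (in Python, purely for concreteness: all the names are invented here, and it glosses over details such as the order in which people are updated within a day):</p>
<pre><code>import random

# Disease states: the phases described above, plus the two outcomes.
SUSCEPTIBLE, LATENT, INFECTIOUS, IMMUNE, DEAD = range(5)

class Person:
    def __init__(self):
        self.state = SUSCEPTIBLE
        self.days = 0           # days spent in the current state

def step_day(population, t_l=7, t_i=14, p_i=0.14, p_d=0.01, p_l=1e-8):
    """Advance the epidemic by one day.  Parameters are named as in the
    text: t_l latency, t_i infectious period, p_i infection probability,
    p_d death probability, p_l daily leakage probability."""
    for person in population:
        if person.state == LATENT:
            person.days += 1
            if person.days >= t_l:
                person.state, person.days = INFECTIOUS, 0
        elif person.state == INFECTIOUS:
            # Pick someone at random (ignoring the tiny chance that it
            # is themselves) and infect them with probability p_i.
            victim = random.choice(population)
            if victim.state == SUSCEPTIBLE and random.random() < p_i:
                victim.state, victim.days = LATENT, 0
            person.days += 1
            if person.days >= t_i:
                # End of the infectious period: die, or become immune.
                person.state = DEAD if random.random() < p_d else IMMUNE
        elif person.state == SUSCEPTIBLE and random.random() < p_l:
            # Leakage: infection arriving from outside the population.
            person.state, person.days = LATENT, 0
</code></pre>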
<p>It is straightforward to compute \(R_0\) for this model: a person is infectious for \(t_i\) days and on each of those days stands a \(p_i\) chance of infecting the person they pick, so in a fully susceptible population</p>
<p>\[
\begin{align}
R_0 &= p_i t_i\\
&= 0.14 \times 14\\
&= 1.96
\end{align}
\]</p>
<p>And then \(R\) declines over time as more people are removed from the population. When \(R < 1\) the epidemic dies out, more-or-less gradually, except for leaks causing occasional infections.</p>
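<p>To make that decline concrete: in a well-mixed model like this one, an infection attempt only succeeds if the randomly-picked person is still susceptible, so (this is my gloss, not something the simulator computes explicitly)</p>
<p>\[
R(t) \approx R_0 \frac{s(t)}{n_p}
\]</p>
<p>where \(s(t)\) is the number of people still susceptible on day \(t\).</p>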
<p>Source code for this model is not currently available, although it may be in future.</p>
<h2 id="how-the-simulations-run">How the simulations run</h2>
<p>All of \(t_l\), \(t_i\), \(p_i\), \(p_d\) and \(p_l\) can be adjusted during a run: the simulator is told to run for a few days, the values can then be adjusted and then it runs again for some given time. In practice the only parameter that I adjusted was \(p_i\): the probability of infection. Changing this during the run directly changes \(R_0\) and hence \(R\) and alters the course of the epidemic.</p>
<p>There is nothing in the model which prevents any of these parameters being adjusted <em>dynamically</em>, based on the current behaviour of the modelled epidemic. In fact I didn’t do that but instead set up ‘configuration sequences’ which are sequences of configurations where the parameters (in practice, just \(p_i\), as well as some reporting parameters) are changed at fixed times, between which the model simply runs.</p>
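<p>A configuration sequence can be as simple as a sorted list of (day, parameter) pairs. A hypothetical sketch, continuing the Python above (the representation is mine, not the real simulator’s):</p>
<pre><code># The mitigated schedule from the first example run below, as
# (day, p_i) pairs.
MITIGATED = [(0, 0.14), (40, 0.06), (120, 0.08), (200, 0.06)]

def p_i_for_day(d, config=MITIGATED):
    """Return the p_i in force on day d under a configuration sequence."""
    current = config[0][1]
    for day, p_i in config:
        if day > d:
            break
        current = p_i
    return current
</code></pre>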
<p>Because there is inevitable variation between runs, the simulations get run several times, and the model also <em>forks</em>: if I want to look at the effect of changing parameters on, say, day \(d = 120\), a single simulation is run to \(d = 119\) and then multiple copies are run on from there. This means that any variation before \(d = 120\) is removed from the forks, since they all come from the same simulation run. This process can happen recursively if need be.</p>
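<p>Under the same assumptions as the sketches above, forking is just a deep copy of the whole simulation state at the fork day: each copy then sees its own random draws and so diverges from the others.</p>
<pre><code>import copy

def fork(simulation, n):
    """Return n copies of a simulation which share all history up to
    the fork point but evolve independently afterwards."""
    return [copy.deepcopy(simulation) for _ in range(n)]
</code></pre>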
<h2 id="some-example-runs">Some example runs</h2>
<p>Here are some simple cases which show the behaviour of the model.</p>
<h3 id="abandoning-mitigation">Abandoning mitigation</h3>
<p>Here is output for a model epidemic in which the mitigation is abandoned after 2 years:</p>
<div class="figure"><img src="/fragments/img/2020/cummings-johnson/mitigated-giving-up-20200603-1M.svg" alt="Mitigated giving up after 2 years, cumulative deaths, population of 1 million" />
<p class="caption">Mitigated giving up after 2 years, cumulative deaths, population of 1 million</p></div>
<p>This is the output of a 4-year run of a model with</p>
<ul>
<li>\(n_p = 10^6\);</li>
<li>\(p_i = 0.14\) initially;</li>
<li>\(p_l = 10^{-8}\).</li></ul>
<p>For the unmitigated forks, \(p_i\) remains at its initial value.</p>
<p>For the completely mitigated forks</p>
<p>\[
p_i = \begin{cases}
0.14&d \lt 40\\
0.06&40 \le d \lt 120\\
0.08&120 \le d \lt 200\\
0.06&d \ge 200
\end{cases}
\]</p>
<p>For the ‘giving up’ forks</p>
<p>\[
p_i = \begin{cases}
0.14&d < 40\\
0.06&40 \le d \lt 120\\
0.08&120 \le d \lt 200\\
0.06&200 \le d \lt 730\\
0.14&d \ge 730
\end{cases}
\]</p>
<p>In other words, what this shows is a scenario where there is no vaccine but mitigation is abandoned after about 2 years. Because some leakage happens, at some point after the mitigation is abandoned the epidemic takes off again and a lot of people die. Exactly when it takes off depends on chance, but in all 5 runs here it’s within about a year and a half.</p>
<p>Scaling the average results from this run to a population of 70 million<sup><a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-3-definition" name="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-3-return">3</a></sup> results in the following figures, all to 3 significant figures:</p>
<ul>
<li>551,000 deaths for the unmitigated epidemic;</li>
<li>40,300 deaths for the completely mitigated epidemic;</li>
<li>535,000 deaths for the epidemic in which mitigation is abandoned on day 730.</li></ul>
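<p>(The scaling is just linear, since the model has no notion of geography: the averaged death counts from the million-person runs are multiplied by 70. The 551,000 unmitigated figure, for instance, corresponds to roughly 7,900 deaths in the million-person model.)</p>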
<p>For the mitigated epidemic this is somewhat lower than what the UK has so far seen, but it is in the right area: the model is clearly not hopeless. In later runs I adjusted the mitigation slightly to compensate for this (see below).</p>
<p>What these results make clear is that, unless there is a vaccine<sup><a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-3-definition" name="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-3-return">3</a></sup>, mitigation has to continue essentially indefinitely, or the epidemic will take off again.</p>
<h3 id="chancy-runaways">Chancy runaways</h3>
<p>Here are two runs with no initial infected population, \(n_s = 0\): there are initially no infected people, and the epidemic takes off only due to leakage, with \(p_l = 10^{-8}\) as before.</p>
<p>Firstly for a population of a million:</p>
<div class="figure"><img src="/fragments/img/2020/cummings-johnson/chancy-runaway-20200602-1M-10ppb.svg" alt="Unmitigated, no seeds, cumulative deaths, population of 1 million, 10 runs" />
<p class="caption">Unmitigated, no seeds, cumulative deaths, population of 1 million, 10 runs</p></div>
<p>Well, you can see that the epidemic takes off after less than two years in all cases.</p>
<p>How likely this runaway is to happen in a given interval of time depends on the population size, as smaller populations experience fewer leakage events. Here is a run for a population of 10,000:</p>
<div class="figure"><img src="/fragments/img/2020/cummings-johnson/chancy-runaway-20200602-10k-10ppb.svg" alt="Unmitigated, no seeds, cumulative deaths, population of 10k, 10 runs" />
<p class="caption">Unmitigated, no seeds, cumulative deaths, population of 10k, 10 runs</p></div>
<p>You can see that only one runaway happened in the three-year simulation.</p>
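<p>A back-of-envelope check (my arithmetic, not the simulator’s output): while nearly everyone is susceptible, the expected number of leaked infections per day is about \(n_p p_l\), so a population of a million expects a leak roughly every 100 days while a population of 10,000 expects one roughly every 10,000 days. Each leak then only starts a runaway with some probability (roughly \(1 - 1/R_0 \approx 0.49\) here, by the usual branching-process argument), so ten three-year runs of the 10,000-person model should expect only about half a runaway between them, which is consistent with the single one seen.</p>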
<h2 id="the-cummings-johnson-effect">The Cummings-Johnson effect</h2>
<p>To model this I started with an epidemic whose \(p_i\) values, before any Cummings-Johnson effect, are:</p>
<p>\[
p_i = \begin{cases}
0.14&d < 40\\
0.06&40 \le d <120\\
0.08&120 \le d < 200\\
0.06&200 \le d < 300\\
0.08&300 \le d < 600\\
0.07&d \ge 600
\end{cases}
\]</p>
<p>All of the models run for 3 years, or 1095 days, and in addition the unmitigated epidemic is always plotted<sup><a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-4-definition" name="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-4-return">4</a></sup>. Each model ran 5 times and the quoted figures are averages, scaled to a population of 70 million and given to 3 significant figures.</p>
<h3 id="cummings-johnson-on-day-120">Cummings-Johnson on day 120</h3>
<p>For this model</p>
<p>\[
p_i = \begin{cases}
0.14&d < 40\\
0.06&40 \le d < 120\\
0.08\times
\left\{1.02, 1.05, 1.10\right\}
&120 \le d < 200\\
0.06\times
\left\{1.01, 1.03, 1.06\right\}
&200\le d < 300\\
0.08\times
\left\{1.005, 1.02, 1.04\right\}
&300 \le d < 600\\
0.07\times
\left\{1.002, 1.01, 1.02\right\}
&d \ge 600
\end{cases}
\]</p>
<p>where the triples of numbers represent the Cummings-Johnson effect weakening social distancing by 2%, 5% and 10% respectively on day 120, with the weakening declining over time.</p>
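<p>One way schedules like these could be generated from the baseline (a sketch, continuing the earlier Python; the function and names are mine):</p>
<pre><code># The baseline schedule from above, as (day, p_i) pairs.
BASE = [(0, 0.14), (40, 0.06), (120, 0.08), (200, 0.06),
        (300, 0.08), (600, 0.07)]

def bump_config(config, bump_day, factors):
    """Multiply p_i by successive factors in every configuration period
    starting on or after bump_day: a decaying Cummings-Johnson bump."""
    bumped, i = [], 0
    for day, p_i in config:
        if day >= bump_day and i < len(factors):
            p_i *= factors[i]
            i += 1
        bumped.append((day, p_i))
    return bumped

# The 2% day-120 variant: p_i becomes 0.08 * 1.02 = 0.0816 for days
# 120-199, and so on.
cummings_2pct = bump_config(BASE, 120, [1.02, 1.01, 1.005, 1.002])
</code></pre>
<p>Here are plots for this:</p>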
<div class="figure"><img src="/fragments/img/2020/cummings-johnson/cummings-20200603-1M.svg" alt="Cummings-Johnson on day 120, 2%, 5% and 10%, population of 1 million" />
<p class="caption">Cummings-Johnson on day 120, 2%, 5% and 10%, population of 1 million</p></div>
<p>Here:</p>
<ul>
<li>the brown curves are the normal courses of the epidemic with and without mitigation;</li>
<li>the blue curves are 2%;</li>
<li>the orange curves are 5%;</li>
<li>the red curves are 10%.</li></ul>
<p>The figures are:</p>
<ul>
<li>551,000 deaths for the unmitigated epidemic;</li>
<li>63,100 deaths for the mitigated epidemic;</li>
<li>70,300 deaths for the 2% weakening;</li>
<li>86,500 deaths for the 5% weakening;</li>
<li>109,000 deaths for the 10% weakening.</li></ul>
<p>Or in other words:</p>
<ul>
<li>7,200 additional deaths for 2% weakening;</li>
<li>23,400 additional deaths for 5% weakening;</li>
<li>45,900 additional deaths for 10% weakening.</li></ul>
<p>These numbers seemed far too high to me. And I also suspect that the epidemic in my model happens more slowly (takes more simulated days) than the real one. So I ran three more models, with the Cummings-Johnson effect taking place at successively later times.</p>
<h3 id="cummings-johnson-on-day-200">Cummings-Johnson on day 200</h3>
<p>For this model</p>
<p>\[
p_i = \begin{cases}
0.14&d < 40\\
0.06&40 \le d < 120\\
0.08&120\le d < 200\\
0.06\times
\left\{1.02, 1.05, 1.10\right\}
&200\le d < 300\\
0.08\times
\left\{1.01, 1.03, 1.06\right\}
&300 \le d < 600\\
0.07\times
\left\{1.005, 1.02, 1.04\right\}
&d \ge 600
\end{cases}
\]</p>
<p>As you can see this allows the mitigated epidemic to run until day 200, when the same decaying effect happens. Here are plots for this:</p>
<div class="figure"><img src="/fragments/img/2020/cummings-johnson/cummings-later-20200603-1M.svg" alt="Cummings-Johnson on day 200, 2%, 5% and 10%, population of 1 million" />
<p class="caption">Cummings-Johnson on day 200, 2%, 5% and 10%, population of 1 million</p></div>
<p>Figures:</p>
<ul>
<li>546,000 deaths unmitigated;</li>
<li>69,900 deaths mitigated;</li>
<li>75,100 deaths 2%;</li>
<li>93,700 deaths 5%;</li>
<li>128,700 deaths 10%.</li></ul>
<p>Excess deaths:</p>
<ul>
<li>5,200 2%;</li>
<li>23,800 5%;</li>
<li>58,800 10%.</li></ul>
<p>This is a little better, but not much, and the 10% case is bizarrely bad.</p>
<h3 id="cummings-johnson-on-day-300">Cummings-Johnson on day 300</h3>
<p>For this model</p>
<p>\[
p_i = \begin{cases}
0.14&d < 40\\
0.06&40 \le d < 120\\
0.08&120\le d < 200\\
0.06&200\le d < 300\\
0.08\times
\left\{1.02, 1.05, 1.10\right\}
&300 \le d < 600\\
0.07\times
\left\{1.01, 1.025, 1.05\right\}
&d \ge 600
\end{cases}
\]</p>
<p>Here are plots for this:</p>
<div class="figure"><img src="/fragments/img/2020/cummings-johnson/cummings-even-later-20200603-1M.svg" alt="Cummings-Johnson on day 300, 2%, 5% and 10%, population of 1 million" />
<p class="caption">Cummings-Johnson on day 300, 2%, 5% and 10%, population of 1 million</p></div>
<p>Figures:</p>
<ul>
<li>551,000 deaths unmitigated;</li>
<li>59,800 deaths mitigated;</li>
<li>73,200 deaths 2%;</li>
<li>90,000 deaths 5%;</li>
<li>138,000 deaths 10%.</li></ul>
<p>Excess deaths:</p>
<ul>
<li>13,400 2%;</li>
<li>30,200 5%;</li>
<li>78,200 10%.</li></ul>
<p>All these figures are <em>worse</em> than the day 200 case, which I think is because the big increase is happening when things are already too relaxed.</p>
<h3 id="cummings-johnson-on-day-600">Cummings-Johnson on day 600</h3>
<p>For this model</p>
<p>\[
p_i = \begin{cases}
0.14&d < 40\\
0.06&40 \le d < 120\\
0.08&120\le d < 200\\
0.06&200\le d < 300\\
0.08&300 \le d < 600\\
0.07\times
\left\{1.02, 1.05, 1.10\right\}
&d \ge 600
\end{cases}
\]</p>
<p>Here are plots for this:</p>
<div class="figure"><img src="/fragments/img/2020/cummings-johnson/cummings-really-late-20200603-1M.svg" alt="Cummings-Johnson on day 600, 2%, 5% and 10%, population of 1 million" />
<p class="caption">Cummings-Johnson on day 600, 2%, 5% and 10%, population of 1 million</p></div>
<p>Figures:</p>
<ul>
<li>546,000 deaths unmitigated;</li>
<li>61,700 deaths mitigated;</li>
<li>63,600 deaths 2%;</li>
<li>68,500 deaths 5%;</li>
<li>80,200 deaths 10%.</li></ul>
<p>Excess deaths:</p>
<ul>
<li>1,900 2%;</li>
<li>6,800 5%;</li>
<li>18,500 10%.</li></ul>
<p>These seem a little less frightening.</p>
<h3 id="why-is-it-so-fierce">Why is it so fierce?</h3>
<p>I was really surprised by how large the differences are. I think part of the answer can be seen by looking at \(R\): at any point the progress of the epidemic goes something like \(e^{\alpha (R -1)t}\), where \(\alpha\) is some fudge factor. The only reason that the exponential runaway doesn’t continue is that \(R\) is a function not only of \(p_i\) but also of the proportion of people who are no longer susceptible. But if that proportion is low, which you very much want it to be, then everything is more or less exponential, and really tiny changes in \(R\) can cause huge explosions.</p>
<p>To control the epidemic over any length of time you need to keep \(R = 1 - \epsilon\) where \(\epsilon \ll 1, \epsilon > 0\): you want to do this because the epidemic will die out so long as \(R < 1\), but the social and economic cost of keeping it significantly below 1 for any length of time is enormous. And for an epidemic which has infected (and therefore killed) only a relatively small proportion of the population, \(R \approx R_0\). So the useful thing to look at is \(\ln R\) & \(\ln R_0\), as this shows small changes near \(R = 1, R_0 = 1\), which is where all the action is<sup><a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-5-definition" name="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-5-return">5</a></sup>.</p>
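<p>A toy illustration of that sensitivity (the numbers here are purely illustrative, with \(\alpha = 1\)): after 100 days, a 4% difference in \(R\) around 1 compounds into a factor of about 55 in the rate of new infections.</p>
<pre><code>import math

# How a small difference in R near 1 compounds: growth goes like
# exp(alpha * (R - 1) * t), with alpha a fudge factor (1 here, purely
# for illustration).
alpha, t = 1.0, 100
ratio = (math.exp(alpha * (1.02 - 1) * t)
         / math.exp(alpha * (0.98 - 1) * t))
print(ratio)  # e**4, about 54.6
</code></pre>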
<p>Here’s a plot of \(\ln R\) and \(\ln R_0\) for the Cummings-Johnson on day 120 2% variant, and the mitigated version without the 2% bump:</p>
<div class="figure"><img src="/fragments/img/2020/cummings-johnson/cummings-2pct-20200603-rs.svg" alt="ln R, ln R0, Cummings-Johnson on day 120, 2% and mitigated" />
<p class="caption">ln R, ln R0, Cummings-Johnson on day 120, 2% and mitigated</p></div>
<p>Interestingly you can see that, for \(d \gtrapprox 500\), the Cummings 2% \(R\) is <em>lower</em> than the mitigated \(R\). But it’s significantly higher for \(d \in [120, 200)\) and somewhat higher for \(d \in [200, 300)\) (although still less than 1 in the second interval).</p>
<p>So, well, very small changes to parameters in exponential processes can make very large differences: that should be obvious.</p>
<p>Runs with more principled values for things would certainly be interesting: my ‘decaying Cummings-Johnson effect’ is pretty <em>ad-hoc</em>, for instance. It would be better to model it as a bump which decays exponentially as people forget, something like \(p_i(t) = p_{i,\text{mit}}(t)\left(1 + \delta e^{-(t - t_0)/\tau}\right)\), where \(p_{i,\text{mit}}\) is the mitigated value: that would be easy to model. Maybe I will have a go at that in due course.</p>
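<p>A sketch of what that would look like (all the parameter values here, in particular \(\tau\), are invented for illustration):</p>
<pre><code>import math

def bumped_p_i(p_i_mitigated, t, t0=120, delta=0.05, tau=60):
    """The suggested decaying effect: on day t0 the mitigated p_i gets
    a multiplicative bump of size delta which then decays exponentially,
    with time constant tau days, as people forget.  All the default
    values here are made up for illustration."""
    if t < t0:
        return p_i_mitigated
    return p_i_mitigated * (1 + delta * math.exp(-(t - t0) / tau))
</code></pre>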
<h2 id="how-many-people-will-cummings-and-johnson-kill">How many people will Cummings and Johnson kill?</h2>
<p>I don’t know. This model is not adequate to give a numerically-correct answer by a long way: it’s full of assumptions, and is in any case an extremely oversimplified model<sup><a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-6-definition" name="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-6-return">6</a></sup>.</p>
<p>But I couldn’t get the number of people they will kill lower than 1,900, and I worked fairly hard to get it that low. I think my model is too sensitive, even though the numbers of people it kills for the mitigated epidemic are pretty reasonable and I did not fine-tune it for that, so I expect the real number will be somewhere between many hundreds and a few thousand. This is somewhere between mass murder and genocide<sup><a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-7-definition" name="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-7-return">7</a></sup>.</p>
<p>Did Cummings & Johnson do this deliberately? Probably not. Are these the only people they will kill, or even most of the people they will kill, due to their ideological, careless and incompetent handling of the epidemic and other things? No. Would the harm have been reduced if Johnson had promptly sacked Cummings? Yes. Would the harm still be reduced if he were to sack him now? Yes. Will he sack him? Of course not. Do either of them care that they will kill a lot of people? Definitely not: the people they have killed and will kill are only little people, like ants.</p>
<p>This is the glorious work of Dominic Cummings, aided and abetted by his idiot stooge, Boris Johnson.</p>
<blockquote>
<p><em>Don’t expect me to lie,</em>
<br /><em>don’t expect me to cry,</em>
<br /><em>don’t expect me to die for thee.</em></p></blockquote>
<hr />
<div class="footnotes">
<ol>
<li id="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-1-definition" class="footnote-definition">
<p>He says he did not take breaks. This seems a deeply implausible claim given that he drove 260 miles with a small child in the car. <a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-1-return">↩</a></p></li>
<li id="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-2-definition" class="footnote-definition">
<p>Which, again, he claims none of his family visited. <a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-2-return">↩</a></p></li>
<li id="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-3-definition" class="footnote-definition">
<p>Another option is that the epidemic becomes globally extinct, when leakage would stop: this seems unlikely. <a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-3-return">↩</a></p></li>
<li id="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-4-definition" class="footnote-definition">
<p>This is not really helpful as it makes the plots harder to read. <a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-4-return">↩</a></p></li>
<li id="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-5-definition" class="footnote-definition">
<p>In my model I’m treating \(R_0\) as something you adjust via changes to \(p_i\), rather than a constant of the epidemic. \(R_0 = p_i t_i\), and I am adjusting \(p_i\). It would perhaps be better to say \(R_0 = p_{i,0}t_i\) and then define \(p_i = p_{i,0} - p_{i,m}\), where \(p_{i,m}\) is the parameter you adjust, and use that together with the proportion of people remaining susceptible to define \(R\): it doesn’t make any difference to what actually happens though. <a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-5-return">↩</a></p></li>
<li id="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-6-definition" class="footnote-definition">
<p>I would be extremely interested in results about the Cummings-Johnson effect from more serious models. Please get in touch if you know of any. I am happy to sign nondisclosure agreements if need be. <a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-6-return">↩</a></p></li>
<li id="2020-06-02-the-glorious-work-of-dominic-cummings-footnote-7-definition" class="footnote-definition">
<p>Since we know that <a href="https://www.bbc.co.uk/news/uk-52219070">BAME people are disproportionately affected by CV19</a> this really is looking like genocide. Perhaps not a deliberate one, but I wonder how much Cummings & Johnson care that a bunch of BAME people will die because of their actions? Not much, I should think. <a href="#2020-06-02-the-glorious-work-of-dominic-cummings-footnote-7-return">↩</a></p></li></ol></div>
An open letter to Michael Johnston — Tim Bradshaw, 2020-05-18
<p>Michael Johnston runs a website dedicated to photography. He also promotes anti-scientific nonsense about audio: you should not support him.</p>
<!-- more-->
<p>[This was an email I never sent: in the end I got fed up and was a lot more rude. I don’t regret that, but perhaps I should. This was also written before COVID–19: it’s pretty clear that anti-scientific behaviour by the US administration is killing tens of thousands of people, which makes this a lot more urgent (although not, in fact, more serious).]</p>
<p>After thinking about it for a few months I have decided to stop my Patreon subscription to TOP.</p>
<p>I’m doing so as a result of your audiophile posts. I don’t want to discuss these in detail, but I think we can agree that these are explicitly and consciously anti-scientific in nature: you have said, for instance, that you would not accept double-blind experiments<sup><a href="#2020-05-18-an-open-letter-to-michael-johnston-footnote-1-definition" name="2020-05-18-an-open-letter-to-michael-johnston-footnote-1-return">1</a></sup>. Since sufficiently-blinded experiments are the <em>only</em> way to remove human bias from experimental results this means you are explicitly, consciously and publicly rejecting science.</p>
<p>I don’t have any problem with what you think about hifi in private — indeed I probably have more fancy hifi than most people, and have built several amplifiers including one valve (tube) one. However, I am not willing to help fund you, or anyone, in making anti-scientific statements in areas where science applies<sup><a href="#2020-05-18-an-open-letter-to-michael-johnston-footnote-2-definition" name="2020-05-18-an-open-letter-to-michael-johnston-footnote-2-return">2</a></sup>.</p>
<p>We live in a world which is built on science: you and I are probably only alive as a result of the work of scientists, and you certainly have working eyes only because of science. We also live in a world where scientists are telling us that unless we take quite urgent action to address environmental problems — largely but not only anthropogenic climate change — we are in extremely bad trouble. Unless we address climate change <em>soon</em> our grandchildren’s generation will have blighted lives and many of them will die in horrible circumstances.</p>
<p>Well, a lot of people don’t like this: they have vested interests in not fixing the problem in the short term, will be dead in the long term and either do not care about their descendants or expect that they will be wealthy enough to fence themselves off as the environment degrades. And they certainly do not care about anyone <em>else’s</em> descendants, especially if those people live far away or look different.</p>
<p>Those who don’t want the problem fixed need other people not to listen to the scientists, or not to believe what they hear. One way they achieve this is by casting doubt on science itself: by casting doubt, ultimately, on the concept that there is such a thing as ‘objective truth’ in areas where we should expect there to be. They have been astonishingly successful at this in the last few years. Of course the side-effects are terrible: people who no longer believe that science works or that truth exists also don’t believe, for instance, that the evidence that vaccination works is real. But the things vaccination protects you against do not care about what you think is real, they only care about what is in fact real: whether you have immunity to them or not, or whether the population as a whole has enough immunity to stop epidemics. And immunity is falling and many children will die. But not the children of the vested-interest people: just other children who they care nothing about.</p>
<p>And that’s what’s coming reasonably soon: in the longer term the result of people not believing the science of climate change and not doing anything about it is going to be billions of additional deaths and billions more shortened lives, and the loss of most or all of our culture.</p>
<p>This is not some conspiracy theory: all this is going on quite openly both in your country and mine.</p>
<p>Well, why does what you say about hifi matter? You’re not, after all, denying anthropogenic climate change or supporting the anti-vaccination nonsense. Why do I care that some middle-aged photographer has whacky unscientific ideas about hifi? I care for two reasons.</p>
<ul>
<li>You don’t get to pick and choose: in the areas where science works, it works, and if you say it does not work in one area the message is that, well, you get to choose when to believe what it tells you or not based on what you want to be true. That is toxic as it means that people just get to choose what they think is true as suits them, <em>which is the whole problem</em><sup><a href="#2020-05-18-an-open-letter-to-michael-johnston-footnote-3-definition" name="2020-05-18-an-open-letter-to-michael-johnston-footnote-3-return">3</a></sup>.</li>
<li>You have a significant audience: people read your blog and some of them are inevitably influenced by what you say.</li></ul>
<p>Finally, why does it matter? It already seems clear that we’re not going to deal with anthropogenic climate change and that the truth-deniers have won: just look at the politics of the last four years. Why should I care that I’m funding a little more of it? Well, that’s true: I think that there is very little hope, and what hope there is left is fading fast. We have perhaps 50 years or so before things get really bad, and far less than that before there is no chance of preventing the catastrophe. Long before that the corrosion of truth will have less serious but still horrible consequences: we are seeing some of them now. The future is not bright.</p>
<p>But there is <em>some</em> hope. Not, perhaps, much hope but there is still some. And I believe that what little I can do I should do to increase the amount of hope, and to decrease the corrosion of truth, in all its forms. And what you are doing is corroding truth. You are only doing it in a small way, but you are doing it. I can only make a difference in a small way, but not supporting TOP is a difference I can make.</p>
<p>This is why I will no longer support TOP financially.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2020-05-18-an-open-letter-to-michael-johnston-footnote-1-definition" class="footnote-definition">
<p>Although it was not clear you knew what a double-blind experiment really was. <a href="#2020-05-18-an-open-letter-to-michael-johnston-footnote-1-return">↩</a></p></li>
<li id="2020-05-18-an-open-letter-to-michael-johnston-footnote-2-definition" class="footnote-definition">
<p>Science does not apply everywhere: it should have nothing to say about what makes a great photograph, or what makes good bokeh for instance, in my opinion. For that matter it should have nothing to say about what makes hifi sound good in the cases where distinctions really exist. I <em>like</em> how my valve amplifier sounds, but I don’t pretend I like how it sounds because it has lower distortion than any reasonable transistor amplifier: I like how it sounds just because it has significant distortion and I like the sound of that distortion. The same is true for records, which I also prefer to CDs, and which also are objectively and measurably far worse in terms of fidelity. <a href="#2020-05-18-an-open-letter-to-michael-johnston-footnote-2-return">↩</a></p></li>
<li id="2020-05-18-an-open-letter-to-michael-johnston-footnote-3-definition" class="footnote-definition">
<p>Note again that I don’t think science is useful for, say, judging photographs as art or cameras or hifi as desirable objects: this is not about that. <a href="#2020-05-18-an-open-letter-to-michael-johnston-footnote-3-return">↩</a></p></li></ol></div>Sexism in computer scienceurn:https-www-tfeb-org:-fragments-2020-05-09-sexism-in-computer-science2020-05-09T17:16:02Z2020-05-09T17:16:02ZTim Bradshaw
<p>Anyone who says that the facts show that men are innately better than women in computing either does not know the facts, does not understand them, or is lying.</p>
<!-- more-->
<h2 id="the-facts">The facts</h2>
<p>In 1971, about 14% of US computer science and information science graduates were women. By 1984, about 38% were. But by 2011 the proportion had fallen to under 18%<sup><a href="#2020-05-09-sexism-in-computer-science-footnote-1-definition" name="2020-05-09-sexism-in-computer-science-footnote-1-return">1</a></sup>. Here is a graph of the proportions by year from 1971 to 2011:</p>
<div class="figure"><img src="/fragments/img/2020/sexism-in-cs/cs-is-graduate-ratio-us-1971-1981.svg" alt="CS & IS graduate ratio, US, 1971-2011" />
<p class="caption">CS & IS graduate ratio, US, 1971–2011</p></div>
<h2 id="what-the-facts-show">What the facts show</h2>
<p>This entire process happened in about two generations: the proportion of women more than doubled in less than one generation, and then about halved in a generation: some of the women studying CS in 2011 could be the daughters of the cohort of 1984, and the granddaughters of the cohort of 1971.</p>
<p>No genetic change in a human population can happen this fast: evolution operates on timescales of thousands to millions of years, not over a small number of decades. This means that <em>whatever caused these changes was not a change in innate ability</em>. There simply can be no question about that: there must be some other explanation, since the innate ability of women to do computer science, or any other innate ability, cannot have changed significantly over this period.</p>
<p>This means that the changes were caused by something environmental. Perhaps in 1984 there was enormous positive discrimination, or in 1970 and 2011 there was enormous negative discrimination, or some combination of the two<sup><a href="#2020-05-09-sexism-in-computer-science-footnote-2-definition" name="2020-05-09-sexism-in-computer-science-footnote-2-return">2</a></sup>.</p>
<p>This data is also perfectly compatible with the conclusion that women may be innately as good at computing as men: 38% is not very far from 50%, and if we assume some level of sexism in 1984<sup><a href="#2020-05-09-sexism-in-computer-science-footnote-3-definition" name="2020-05-09-sexism-in-computer-science-footnote-3-return">3</a></sup> it is easily possible that the underlying figure was 50%.</p>
<p>What this data tells us, unambiguously, is that whatever has caused these changes is <em>environmental</em>, and is not due to any differences in innate ability, as such changes simply cannot happen over this timescale. It also tells us that things have got a lot more skewed since 1984: progress in this area has not only stopped, it is being reversed and has been so since the mid 1980s: the situation now is only about 28% less skewed than it was in 1971.</p>
<h2 id="what-the-facts-dont-show">What the facts don’t show</h2>
<p>What the data does <em>not</em> say is why this has happened, except that it is not due to changes in innate ability.</p>
<p>While it is almost certain that there was strong institutional discrimination against women in 1970, it seems unlikely that, in 2011, there was any kind of institutional discrimination, as this would be illegal and institutions are pretty good targets for legal action. So it seems unlikely that the decline is due to <em>institutional</em> discrimination. However all the data says is that there has been a decline: not why.</p>
<p>If we assume that most of the change is not due to institutional discrimination then it’s tempting to speculate on what <em>did</em> cause it. Well, I’m not going to do that: I have theories but they are based either on no evidence or on anecdotal evidence. Perhaps someone has done proper research into the causes, but I don’t know of any. There is a vast surfeit of theories based on little or no data, and outright made-up stuff on the internet — wild speculation, lies and ‘alternative facts’<sup><a href="#2020-05-09-sexism-in-computer-science-footnote-4-definition" name="2020-05-09-sexism-in-computer-science-footnote-4-return">4</a></sup> — and people are dying of this surfeit: I won’t add any more to it.</p>
<p>One possible inference is that women who, today, succeed at computing degrees, have done so against significant odds. It’s very likely that this means that they are <em>better</em> than men who achieve the same grades. So companies, if they are legally able to, might consider actively selecting female candidates for jobs, on the grounds that they are, probably, better.</p>
<h2 id="related-lies-and-confusions">Related lies and confusions</h2>
<p>In any area where people claim that some group is innately better than some other group based on some metric, and where the scores of one or both of those groups have changed radically over time, it is immediately safe to conclude that those claims are either lies, confusions or both: either the metric is junk, or it is not measuring innate ability. The obvious example of this is racial ‘science’.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2020-05-09-sexism-in-computer-science-footnote-1-definition" class="footnote-definition">
<p><a href="https://nces.ed.gov/programs/digest/d12/tables/dt12_349.asp">Source</a>. Later figures may be available, but I couldn’t find them. I also don’t have the figures for other countries but I expect they are broadly similar. <a href="#2020-05-09-sexism-in-computer-science-footnote-1-return">↩</a></p></li>
<li id="2020-05-09-sexism-in-computer-science-footnote-2-definition" class="footnote-definition">
<p>I worked in academic computing from shortly after 1984 to the late 1990s and although I am not female I can say with some certainty that there was not enormous positive discrimination. <a href="#2020-05-09-sexism-in-computer-science-footnote-2-return">↩</a></p></li>
<li id="2020-05-09-sexism-in-computer-science-footnote-3-definition" class="footnote-definition">
<p>Again, in my experience there was some level of sexism in academia in this period. <a href="#2020-05-09-sexism-in-computer-science-footnote-3-return">↩</a></p></li>
<li id="2020-05-09-sexism-in-computer-science-footnote-4-definition" class="footnote-definition">
<p>Which are, of course, lies. <a href="#2020-05-09-sexism-in-computer-science-footnote-4-return">↩</a></p></li></ol></div>
The revenge of the blob — Tim Bradshaw, 2020-03-18
<p><em>And ye shall know the truth, and the truth shall make you free</em>.</p>
<!-- more-->
<h2 id="the-blob">The blob</h2>
<p>It has been very fashionable among populist politicians and their supporters to fulminate against ‘the blob’. The blob is:</p>
<ul>
<li>the civil service;</li>
<li>journalists;</li>
<li>news reporting organisations other than ones that report ‘good’ news<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-1-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-1-return">1</a></sup>;</li>
<li>the BBC in particular;</li>
<li>scientists, especially climate scientists;</li>
<li>economists;</li>
<li>experts of all kinds;</li>
<li>judges;</li>
<li>the whole legal system;</li>
<li>the liberal/metropolitan elite in general, however it is defined;</li>
<li>the deep state, whatever that may mean;</li>
<li>the reality-based community<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-2-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-2-return">2</a></sup>;</li>
<li>anyone who disagrees with whatever plan is in favour this week, or points out that it is not possible, will be economically catastrophic, is illegal, or anything inconvenient like that.</li></ul>
<p>The blob is an amorphous group of people who all think the same way and who all are somehow trying to prevent whatever transformative programme the populist wants to embark on. Which people exactly constitute the blob varies from time to time and populist to populist. Whatever the blob thinks is wrong, and the blob must therefore be eliminated so that we can all get things done<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-3-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-3-return">3</a></sup> and, rejoicing in our inevitable victory, march forward to the sunlit uplands of the glorious future that awaits those lucky elect over whom we will rule in splendour for a thousand years.</p>
<h2 id="populism">Populism</h2>
<p>I’m sure there are many elaborate definitions of what it means to be a populist. One fashionable idea is that populists somehow side with ‘the people’, who are good, against ‘the elite’ (<em>aka</em>, of course, ‘the blob’) who are bad. But definitions vary a lot depending on who is making them and when they make them<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-4-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-4-return">4</a></sup>. One defining characteristic is that</p>
<blockquote>
<p>populists seek to gain power by providing simple, appealing answers to complex, unappealing problems.</p></blockquote>
<p>These answers are almost always wrong because problems which have easy answers have already been solved and are no longer problems. The definition of a ‘complex, unappealing’ problem is one which does not have a simple, appealing answer.</p>
<p>But it doesn’t matter whether the answers are wrong: they are appealing and easy to understand, and the populist aims to ride that to power. Consider this problem:</p>
<blockquote>
<p>foreigners are coming to our country and eating our children!</p></blockquote>
<p>Well, if you think about it, this is really not that simple to solve: there are quite strong taboos amongst humans about eating other humans — <a href="https://en.wikipedia.org/wiki/The_Man-Eating_Myth" title="The man-eating myth (Wikipedia)">many, perhaps all, claims of large-scale cannibalism turn out not to be true</a> — and there are even stronger taboos against eating children. So what is making these foreigners so desperate that they feel they need to eat children? Even more so, why are they coming here to do it: are there no children to eat locally? Perhaps they have eaten all their own children: but then why haven’t they died out? It’s all, really, quite complicated.</p>
<p>But the populist doesn’t care about this as they have a simple answer:</p>
<blockquote>
<p>THROW THESE LOATHSOME CHILD-EATING FOREIGNERS OUT! LET THEM EAT THEIR OWN CHILDREN! BUILD THE WALL! REMEMBER THE SPIRIT OF THE BLITZ! ENGLAND FOR THE ENGLISH!!!</p></blockquote>
<p>Well, that will certainly fix it, at least until the populist has ridden the wave of disgusted horror at the unspeakable behaviour of these horrible baby-eating foreigners<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-5-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-5-return">5</a></sup> to power, wealth and glory.</p>
<p>And what happens then? What happens when the whole problem turns out to be intractable after all? There are a range of answers to that.</p>
<p>People have pretty short memories so a good approach is just to try to forget, either that the problem existed at all, or that the solution was ever suggested. When you come up against people who <em>do</em> remember then you can simply ignore them, or deny that you ever offered the solution or in fact that the problem ever existed at all. While doing so be sure to imply that these inconvenient long-memoried people are acting in bad faith somehow, or are acting against the will of the people which you, of course, represent.</p>
<p>Blaming someone else is also a good approach: of course the problem would be solved by now but the liberal elite — mostly made up of foreigners<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-6-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-6-return">6</a></sup> and people who are, you know, <em>different</em> — is preventing the solution for reasons of their own which you will hint, but never quite say, are because they quite like a bit of children-eating themselves, as rootless cosmopolitans<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-7-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-7-return">7</a></sup> tend to do. Certainly they are enemies of the people and something must be done about them (that ‘something’ may, you will imply, involve camps).</p>
<p>Of course you can simply lie that the problem has been solved when it has not been. If it’s not, in fact, really a very severe problem — only one baby was ever eaten, and it turns out that the evidence for even that is pretty apocryphal — then you can just declare it solved and move on.</p>
<p>A final, brilliant, approach is to <em>make up problems which do not exist</em> and then, later, declare them solved. Foreigners certainly no longer come here and eat our children: this is, therefore, a problem our glorious leader has solved! Crime also is no longer rising and this too is something which the great chief has strived day and night to achieve and why he must be elected as leader for life. Do not mention that crime was not rising previously, still less that it now is: only enemies of the people with their annoying facts would do that.</p>
<p>What is it that makes the populist’s appealing answers so appealing? What, exactly, do they appeal to? Well, the answer is obvious: just look at the answers that populists give. They appeal to the things that, secretly, ‘everyone knows’ are true: to things that people perhaps think but, until recently and not always even now, most people have not dared to say in public for the last few decades; they appeal to instinct, to intuition, to prejudice, to bigotry. But they never appeal to rationality.</p>
<p>So this is because, secretly, everyone is a bigot, right? No, it’s not: a fair number of people <em>are</em> secretly — and, increasingly, not so secretly — bigots of course, but by no means everyone is. Until fairly recently the proportion had also almost certainly been declining for decades. Rather this is because populists are dealing with an awkward truth: <em>there is no division between ‘the people’ and ‘the elite’</em>: there are just people, belonging to a myriad different intersecting groupings, with each person usually belonging to many groups. But mostly, there are just <em>people</em>.</p>
<p>So the populist has to <em>invent</em> groups of people to set against each other, and then to persuade enough people that they belong to the ‘good’ group <em>aka</em> ‘the people’ by various rhetorical tricks. There’s no ‘white working class’<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-8-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-8-return">8</a></sup>, until you talk about it enough, and then suddenly there is. Indeed there is no England, until you persuade enough people that, well, English people are not the same as Scottish or Welsh people, and definitely not the same as people who live on the wrong side of some water who, really, are barely people at all. There is certainly no blob until you persuade enough people that there is, and that the people in it are bad people and should most definitely not be listened to and perhaps, in due course, be eliminated. Not surprisingly a good way to invent these groups is by invoking bigotry, because bigotry is entirely about creating artificial divisions between groups of people.</p>
<p>What they are doing is something physicists call <a href="https://en.wikipedia.org/wiki/Symmetry_breaking" title="Symmetry breaking">‘symmetry breaking’</a>, which is a process where initially tiny differences get blown up so they become very large. And they’re doing this so that they can construct a large group of people who will support them, and force into existence one or more other groups who can be identified as the enemy, and who can be blamed for all bad things. A good example of this process is sentiment in the UK about the EU: this was simply not a major issue between 1990 and 2010; yet from 2016 until COVID–19 displaced it in early 2020 it entirely dominated UK politics. The populists have, quite brilliantly, divided the country into ‘the people’ who now desperately want to escape the EU they were hardly aware of only a few years before and a despised elite who are supposed to be plotting to prevent this, and the populists have ridden the division they have invented to power. Brexit is a canonical recent example of gaining power by providing simple, appealing, and wrong answers to complex, unappealing problems.</p>
<h2 id="populists">Populists</h2>
<p>What sort of people do this?</p>
<ul>
<li>Actual bigots, such as Trump, Bolsonaro, Bannon & others. They are not pretending to hate people they see as different, they really do hate them.</li>
<li>People for whom personal power and glory matters above all else, such as Trump & Johnson. Populism is an easy way of gaining power if what you care about is power rather than the welfare of the people over whom you have power, so people who care about having power above everything tend to be populists when they can’t be despots, or perhaps as a route to becoming despots<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-9-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-9-return">9</a></sup>. Almost all politicians are interested in personal power, of course: populists are different because <em>anything</em> else can be traded for power. Does anyone really think that Johnson really believes in brexit? Of course he does not: he believes only in Johnson, and he will support anything that furthers that cause.</li>
<li>Cranks, such as Cummings and perhaps Bannon: the true believers. These people are often sidekicks or advisors & are by far the most interesting group, and perhaps the most dangerous one as well.</li></ul>
<p>Populists are <em>not</em> people who merely want power, or who are involved in extracting money from their position or other forms of corruption: populists <em>do</em> want power, with the possible exception of the cranks, and usually <em>are</em> corrupt, but these things are true of almost all politicians and are not useful distinguishing features of populists<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-10-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-10-return">10</a></sup>.</p>
<h2 id="cranks">Cranks</h2>
<p>Anyone who has worked as a scientist or with scientists or in many other fields will have come across cranks. These are the people who have disproved special relativity, who can show that quantum mechanics is incorrect, who believe in perpetual motion and who want to tell you about it in endless, excruciating detail. They seem annoying but harmless until suddenly they aren’t: suddenly they’re refusing to vaccinate their children causing measles outbreaks and threatening herd immunity; suddenly they are destroying telecommunications infrastructure; suddenly they are advocating eugenics and ‘scientific’ racism; suddenly they believe the apocalypse is coming; suddenly, they are the chief advisers to the president or the prime minister.</p>
<p>It’s easy to think that cranks are just stupid people, but they’re not: <em>Trump</em> is stupid and Johnson is superficial, but whatever Cummings & Bannon are they’re not stupid. Instead I think that the distinguishing feature of cranks is that</p>
<blockquote>
<p>cranks don’t realise when they don’t understand something<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-11-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-11-return">11</a></sup>.</p></blockquote>
<p>So, for example, if I try to understand <a href="https://en.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27s_Last_Theorem" title="Wiles's proof of Fermat's last theorem">Wiles’s proof of Fermat’s last theorem</a> I very quickly realise that it is beyond my understanding: perhaps if I spent the rest of my life on it I could understand it, eventually. But in practice I couldn’t because I just don’t have enough intuition for that sort of maths, and almost certainly I am also just not clever enough. That doesn’t happen for a crank: if they start off trying to understand special relativity and fail to do so they never recognise that they have failed. Instead, when they start trying to do calculations and get answers which disagree with special relativity or are inconsistent, they conclude that <em>everyone else is wrong</em>: that they alone understand special relativity or that they alone understand what is wrong with it. This then leads to some bad places.</p>
<blockquote>
<p>Why does everyone refuse to listen to me when I try to explain how they are wrong about special relativity? Why won’t they publish my papers? Why do they all claim to think the same thing when it’s so obviously wrong? It’s group-think! Do they really believe what they profess to believe or are they hiding something? Is there some kind of hidden conspiracy of elite scientists trying to suppress the truth, which I have now exposed? And wasn’t Einstein, the founder of the conspiracy, Jewish? Why yes, he was. What is really going on here? Why is the cosmopolitan elite suppressing the truth? Are they in league with the financiers? How are the climate scientists involved? What are they concealing from the common, decent, everyday working folk? The truth is out there, if you will only look, however hard the hidden superiors try to conceal it! THE TRUTH IS OUT THERE.</p></blockquote>
<p>Not all cranks are populists, but it’s pretty easy to see why populism attracts cranks: the intellectuals of populism are cranks.</p>
<p>The trouble with populist cranks is that they <em>really believe</em> what they profess to believe. The bigots are just little knots of fear and hatred, the power-seekers don’t really believe anything at all, but the cranks have constructed vast thought palaces which may even, at first sight, seem plausible. And the cranks are not stupid: their simple, appealing answers don’t work because complicated problems simply don’t have simple appealing answers, but they can and will argue for them endlessly in enormous and incomprehensible detail. Arguing with a crank is like fighting an octopus: whenever you think you’re winning there’s another tentacle to deal with.</p>
<p>And the cranks really hate the blob, because there’s a reason the blob disagrees with them: the cranks are wrong. But the cranks are now in power: they have won and they are going to destroy the blob so they’ll never have to listen to all the reasons they’re wrong again. The octopus now has an infinite number of tentacles, and a flamethrower.</p>
<h2 id="against-the-blob">Against the blob</h2>
<p>What the blob represents is <em>truth</em>: the truth discovered by good journalism, the truth uncovered by the legal system, the truth discovered by scientists and economists. And the populists hate the truth because their programme is built on lies. The bigots hate truth because it exposes their bigotry for the lies it is, and also simply because they are made of hate; the power-seekers hate the truth because they have built their path to power on lies; finally the cranks hate the truth because they don’t understand what truth is.</p>
<p>And so the populists set out to destroy the blob, and with it any notion of truth. The BBC must be eliminated because it tries to keep its reporting unbiased<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-12-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-12-return">12</a></sup> and to uncover the truth, rather than this week’s alternative truth. Science must be discredited because the facts it uncovers may be inconvenient, and similarly economists and experts of all kind must go as they seek to point out the gaping holes in the cascade of lies the populists tell: we have, after all, had enough of experts. The legal system must be dismantled and reassembled to suit the populist agenda.</p>
<p>When this is done there will be no truth left: all will be lies and nothing will matter. Any facile answer to a problem can be given to anyone and anyone who points out that it is false or impossible will, if they have not already been dealt with, simply be eliminated. This is what the populists seek to achieve: the death of truth.</p>
<h2 id="the-revenge-of-the-truth">The revenge of the truth</h2>
<p>Once the truth is dead, simple appealing answers to complex, unappealing problems — otherwise known as lies — are, well, simple and appealing: combined with appeals to the substantial minority of secret bigots & conspiracy theorists they’ve worked pretty well for the populists. Once the blob is eliminated who, really, will care if the answers are wrong? So the good honest people will be poorer once they have gloriously been marched into the sunlit uplands; but they won’t be <em>much</em> poorer and they probably won’t notice. If they do notice, well, look at those cosmopolitan elite Europeans who have made use of their elitist skills to not be so poor: it’s their fault, we should, you know, do something about them, too. So the fruit will rot in the fields for want of people to pick it but we can’t allow those elite dusky foreigners here to pick it: we never liked fruit anyway. And of course the children and grandchildren of the working folk are going to live blighted lives because we chose to treat climate change as some conspiracy of elite blob scientists and anyway doing anything about it would have hurt our investment portfolios<sup><a href="#2020-03-18-the-revenge-of-the-blob-footnote-13-definition" name="2020-03-18-the-revenge-of-the-blob-footnote-13-return">13</a></sup>; but, well, we’ll be long gone by then and who really cares about their children? What sort of person even knows how many children he has?</p>
<p>And then, suddenly, not. Suddenly there’s a complex, unappealing problem which is killing people, today. Suddenly you are a faced with a problem which simply does not care about the lies you tell: it cares only about the truth. You can’t lie to something which is not sentient. Suddenly your simple, appealing answers are going to cause tens or hundreds of thousands of people to die, not over a few decades but over a few months, and people won’t have time to forget that it was your wrong answer that killed their friends and their family as they dig the mass graves. Shit just got real.</p>
<p>And suddenly, it turns out that the blob were, all along, not the villains they were made out to be: the boring old civil servants turn out to be good at actually administering things and understand how to deal with crises, the scientists turn out to be good at understanding what it is that is killing people and how to stop it. The BBC turn out to be good at communicating the truths people need to understand if they want to avoid dying. And experts, well, it turns out they are some use after all. The cranks’ tangles of mad ideas turn out to be mad. Real problems don’t get solved by a torrent of bullshit and lies: they need real solutions based on real data and real understanding. The reality-based community turn out to be useful after all. Suddenly the blob is your best friend, at least until the crisis is over. Truth matters.</p>
<p>Or, well, you could just keep on piling lie on lie and hope no-one notices the piles of corpses rotting in the streets. It’s the American way.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-1-definition" class="footnote-definition">
<p>‘Good’ news is, of course, fake news, but not ‘fake news’, which is good news. <a href="#2020-03-18-the-revenge-of-the-blob-footnote-1-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-2-definition" class="footnote-definition">
<p>‘The aide said that guys like me were “in what we call the reality-based community”, which he defined as people who “believe that solutions emerge from your judicious study of discernible reality. [But] that’s not the way the world really works anymore”.’ Yes, <a href="https://en.wikipedia.org/wiki/Reality-based_community" title="The reality-based community">really</a> <a href="#2020-03-18-the-revenge-of-the-blob-footnote-2-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-3-definition" class="footnote-definition">
<p>Which things exactly need to get done doesn’t matter very much so long as they have memorable names. The important thing is to do something, something <em>important</em>, something <em>transformative</em>, something that respects the <em>will of the people</em> to which all populists have immediate unconscious access. <a href="#2020-03-18-the-revenge-of-the-blob-footnote-3-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-4-definition" class="footnote-definition">
<p>I have heard people described as populists because they let the banks get away with things they shouldn’t have in the run up to the financial crisis of 2007–2008, and because they gave peerages to their friends. These are, at best, very odd definitions of ‘populism’: although these activities certainly made the people concerned popular with a group of people, that group of people was ‘bankers and the friends of politicians’, who are not really ‘the people’. <a href="#2020-03-18-the-revenge-of-the-blob-footnote-4-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-5-definition" class="footnote-definition">
<p>Although it may not be stated there will, of course, be no doubt that the filthy baby-eaters are both ‘dusky’ and have ‘watermelon smiles’, even when they are not looking like letter boxes. <a href="#2020-03-18-the-revenge-of-the-blob-footnote-5-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-6-definition" class="footnote-definition">
<p>Not, of course, the baby-eating kind. On the other hand you never know: what <em>do</em> they eat at their elite dinner parties? <a href="#2020-03-18-the-revenge-of-the-blob-footnote-6-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-7-definition" class="footnote-definition">
<p>‘If you believe you’re a citizen of the world, you’re a citizen of nowhere. You don’t understand what the very word “citizenship” means.’ — <a href="https://www.citizen-nowhere.com/quotes/" title="Citizens of nowhere">Theresa May, 2016</a> <a href="#2020-03-18-the-revenge-of-the-blob-footnote-7-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-8-definition" class="footnote-definition">
<p>There is a working class, or at least there was, but does it really matter what colour your skin is if you belong to it? Is the implicit ‘black working class’ distinct in any way other than the colour of its members’ skin? Why would anyone who was not trying to create division where none really exists use the term ‘white working class’? <a href="#2020-03-18-the-revenge-of-the-blob-footnote-8-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-9-definition" class="footnote-definition">
<p>In so far as he is capable of planning, this seems likely to be Trump’s plan. <a href="#2020-03-18-the-revenge-of-the-blob-footnote-9-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-10-definition" class="footnote-definition">
<p>Although Trump is corrupt in a deeply spectacular way. <a href="#2020-03-18-the-revenge-of-the-blob-footnote-10-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-11-definition" class="footnote-definition">
<p>For cranks, there are no known unknowns, only unknown unknowns. <a href="#2020-03-18-the-revenge-of-the-blob-footnote-11-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-12-definition" class="footnote-definition">
<p>‘The corporation is either “stacked full of right-wingers” (as a Guardian columnist complained) or so lefty that even its “Sherlock” detective drama contains anti-Tory messages (as claimed by the Daily Mail). Yet polling by the Reuters Institute finds that the BBC reaches an audience that is broadly in the middle of the political spectrum. This contrasts with its main commercial rivals, ITV and Sky, whose viewers lean to the right, and with public broadcasters in other countries, whose audiences usually lean left’ — <a href="https://www.economist.com/britain/2020/04/25/the-bbc-is-having-a-good-pandemic" title="The BBC is having a good pandemic">The Economist, 25th April 2020</a> <a href="#2020-03-18-the-revenge-of-the-blob-footnote-12-return">↩</a></p></li>
<li id="2020-03-18-the-revenge-of-the-blob-footnote-13-definition" class="footnote-definition">
<p>Of course we don’t have investment portfolios, because we are simple honest people, like you. Almost everyone at Eton is the first generation of their family to have gone to school, don’t you know: their fathers were down the pit at fourteen. Of course we’re not shorting the pound: I don’t even know what that means … oh, hello, sorry I have to take this call from my, ah, friend … hello, yes, yes, 14 at 330, yes, buy Euro, yes, jolly good. <a href="#2020-03-18-the-revenge-of-the-blob-footnote-13-return">↩</a></p></li></ol></div>The U combinatorurn:https-www-tfeb-org:-fragments-2020-03-09-the-u-combinator2020-03-09T17:45:22Z2020-03-09T17:45:22ZTim Bradshaw
<p>The U combinator allows you to define recursive functions and I think it is simpler to understand than the Y combinator.</p>
<hr />
<p>It’s not obvious how things like <code>letrec</code> get defined in Scheme without using secret assignment. In fact I think they <em>are</em> defined using secret assignment:</p>
<pre><code>(letrec ([f (λ (...) ... (f ...) ...)])
...)</code></pre>
<p>turns into</p>
<pre><code>(let ([f ...])
(set! f (λ (...) ... (f ...) ...))
...)</code></pre>
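<p>Here is that transformation written out concretely for a real function (an illustrative sketch only: I am not claiming Racket’s <code>letrec</code> literally expands like this):</p>
<pre><code>;; factorial via 'secret assignment': bind f to a placeholder,
;; then mutate the binding so the λ can refer to itself
(define fact
  (let ([f 'unbound])
    (set! f (λ (n)
              (if (zero? n)
                  1
                  (* n (f (- n 1))))))
    f))</code></pre>
<p>and now <code>(fact 5)</code> is <code>120</code>.</p>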
<p>But it’s interesting to see how you can define recursive functions without relying on assignment, including mutually-recursive collections of functions. One way is using the U combinator.</p>
<p>I suspect that there is lots of information about this out there, but it’s seriously hard to search for anything which looks like ‘*-combinator’ now (so now I am starting a set of companies called ‘integration by parts’, ‘the quotient rule’ &c).</p>
<p>You can famously do this with the Y combinator, but I didn’t want to do that because Y is something I find I can understand for a few hours at a time and then I have to work it all out again. But it turns out that you can use something much simpler: the U combinator. It seems to be even harder to search for this than Y, but here is a quote about it:</p>
<blockquote>
<p>In the theory of programming languages, the U combinator, \(U\), is the mathematical function that applies its argument to its argument; that is \(U(f) = f(f)\), or equivalently, \(U = \lambda f \cdot f(f)\).</p></blockquote>
<blockquote>
<p>Self-application permits the simulation of recursion in the λ-calculus, which means that the U combinator enables universal computation. (The U combinator is actually more primitive than the more well-known fixed-point Y combinator.)</p></blockquote>
<blockquote>
<p>The expression \(U(U)\) is the smallest non-terminating program.</p></blockquote>
<p>(Text mildly edited from <a href="http://www.ucombinator.org/">here</a>, which, apart from this quote, is unfortunately not a site about the U combinator.)</p>
<h2 id="prerequisites">Prerequisites</h2>
<p>All of the following code samples are in <a href="https://racket-lang.org/">Racket</a>. The macros are certainly Racket-specific and some of the other code probably is as well. To make the macros work you will need <code>syntax-parse</code> via:</p>
<pre><code>(require (for-syntax syntax/parse))</code></pre>
<p>However note that my use of <code>syntax-parse</code> is naïve in the extreme: I’m really just an unfrozen CL caveman pretending to understand Racket’s macro system.</p>
<p>Also note I have not ruthlessly turned everything into λ: rather than <code>((λ (...) ...) ...)</code> there is <code>(let ([... ...] ...) ...)</code> in this code; there is use of multiple values including <code>let-values</code>; there is <code>(define (f ...) ...)</code> rather than <code>(define f (λ (...) ...))</code>; and so on.</p>
<h2 id="two-versions-of-u">Two versions of U</h2>
<p>The first version of U is the obvious one:</p>
<pre><code>(define (U f)
(f f))</code></pre>
<p>But this will run into some problems with an applicative-order language, which Racket is by default. To avoid that we can make the assumption that <code>(f f)</code> is going to be a function, and wrap that form in another function to delay its evaluation until it’s needed: this is the standard trick that you have to do for Y in an applicative-order language as well. I’m only going to use the applicative-order U when I have to, so I’ll give it a different name:</p>
<pre><code>(define (U/ao f)
(λ args (apply (f f) args)))</code></pre>
<p>Note also that I’m allowing more than one argument rather than doing the pure-λ-calculus thing.</p>
<h2 id="using-u-to-construct-a-recursive-functions">Using U to construct a recursive functions</h2>
<p>To do this we use a trick similar to the one used with Y: write a function which, given as argument a function which deals with the recursive cases, will return a recursive function. And obviously I’ll use the Fibonacci function as the canonical recursive function.</p>
<p>So, consider this thing:</p>
<pre><code>(define fibber
(λ (f)
(λ (n)
(if (<= n 2)
1
(+ ((U f) (- n 1))
((U f) (- n 2)))))))</code></pre>
<p>This is a function which, given another function <code>f</code> such that <code>(U f)</code> computes smaller Fibonacci numbers, returns a function which computes the Fibonacci number for <code>n</code>.</p>
<p>In other words, <em><code>U</code> of this function is the Fibonacci function</em>!</p>
<p>And we can test this:</p>
<pre><code>> (define fibonacci (U fibber))
> (fibonacci 10)
55</code></pre>
<p>So that’s very nice.</p>
<h2 id="wrapping-u-in-a-macro">Wrapping U in a macro</h2>
<p>So, to hide all this the first thing to do is to remove the explicit calls to <code>U</code> in the recursion. We can lift them out of the inner function completely:</p>
<pre><code>(define fibber/broken
(λ (f)
(let ([fib (U f)])
(λ (n)
(if (<= n 2)
1
(+ (fib (- n 1))
(fib (- n 2))))))))</code></pre>
<p><em>Don’t try to compute <code>U</code> of this</em>: it will recurse endlessly because <code>(U fibber/broken)</code> -> <code>(fibber/broken fibber/broken)</code> and this involves computing <code>(U fibber/broken)</code>, and we’re doomed.</p>
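<p>Written out as a sketch of the reduction, the problem is easy to see:</p>
<pre><code>;; (U fibber/broken)
;; = (fibber/broken fibber/broken)         ; definition of U
;; = (let ([fib (U fibber/broken)])        ; body of fibber/broken
;;     (λ (n) ...))
;;
;; evaluating the binding for fib needs (U fibber/broken) right now,
;; so the inner λ is never reached</code></pre>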
<p>Instead we can use <code>U/ao</code>:</p>
<pre><code>(define fibber
(λ (f)
(let ([fib (U/ao f)])
(λ (n)
(if (<= n 2)
1
(+ (fib (- n 1))
(fib (- n 2))))))))</code></pre>
<p>And this is all fine: <code>((U fibber) 10)</code> is <code>55</code> (and terminates!).</p>
<p>Purists can then turn <code>let</code> into <code>λ</code> in the usual way:</p>
<pre><code>(define fibber
(λ (f)
((λ (fib)
(λ (n)
(if (<= n 2)
1
(+ (fib (- n 1))
(fib (- n 2))))))
(U/ao f))))</code></pre>
<p>And this is really all you need to be able to write the macro:</p>
<pre><code>(define-syntax (with-recursive-binding stx)
(syntax-parse stx
[(_ (name:id value:expr) form ...+)
#'(let ([name (U (λ (f)
(let ([name (U/ao f)])
value)))])
form ...)]))</code></pre>
<p>Or, for the pure of heart:</p>
<pre><code>(define-syntax (with-recursive-binding stx)
(syntax-parse stx
[(_ (name:id value:expr) form ...+)
#'((λ (name)
form ...)
(U (λ (f)
((λ (name)
value)
(U/ao f)))))]))</code></pre>
<p>And this works fine:</p>
<pre><code>(with-recursive-binding (fib (λ (n)
(if (<= n 2)
1
(+ (fib (- n 1))
(fib (- n 2))))))
(fib 10))</code></pre>
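<p>For reference, that use of the macro expands into something like this (hand-expanded, so treat it as a sketch rather than what <code>expand</code> would actually print):</p>
<pre><code>(let ([fib (U (λ (f)
                (let ([fib (U/ao f)])
                  (λ (n)
                    (if (<= n 2)
                        1
                        (+ (fib (- n 1))
                           (fib (- n 2))))))))])
  (fib 10))</code></pre>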
<h2 id="a-caveat-on-bindings">A caveat on bindings</h2>
<p>One fairly obvious thing here is that there are <em>two</em> bindings constructed by this macro: the outer one, and an inner one of the same name. And these are not bound to the same function in the sense of <code>eq?</code>:</p>
<pre><code>(with-recursive-binding (ts (λ (it)
(eq? ts it)))
(ts ts))</code></pre>
<p>is <code>#f</code>. This matters only in a language where bindings can be mutated: a language with assignment in other words. Both the outer and inner bindings, unless they have been mutated, are to functions which are identical <em>as functions</em>: they compute the same values for all values of their arguments. In fact, it’s hard to see what purpose <code>eq?</code> would serve in a language without assignment.</p>
<p>This caveat will apply below as well.</p>
<h2 id="two-versions-of-u-for-many-functions">Two versions of U for many functions</h2>
<p>The obvious generalization of U, U*, to many functions is that \(U^*(f_1, \ldots, f_n)\) is the tuple \((f_1(f_1, \ldots, f_n), f_2(f_1, \ldots, f_n), \ldots, f_n(f_1, \ldots, f_n))\). And a nice way of expressing that in Racket is to use multiple values:</p>
<pre><code>(define (U* . fs)
(apply values (map (λ (f)
(apply f fs))
fs)))</code></pre>
<p>And we need the applicative-order one as well:</p>
<pre><code>(define (U*/ao . fs)
(apply values (map (λ (f)
(λ args (apply (apply f fs) args)))
fs)))</code></pre>
<p>Note that U* is a true generalization of U: <code>(U f)</code> and <code>(U* f)</code> are the same.</p>
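<p>It’s easy to check this in the REPL, reusing <code>fibber</code> from above (assuming the <code>U/ao</code>-based definition is the one in scope): since <code>values</code> of a single value is just that value, the two behave identically.</p>
<pre><code>> ((U fibber) 10)
55
> ((U* fibber) 10)
55</code></pre>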
<h2 id="using-u-to-construct-mutually-recursive-functions">Using U* to construct mutually-recursive functions</h2>
<p>I’ll work with a trivial pair of functions:</p>
<ul>
<li>an object is a <em>numeric tree</em> if it is a cons and its car and cdr are numeric objects;</li>
<li>an object is a <em>numeric object</em> if it is a number, or if it is a numeric tree.</li></ul>
<p>So we can define ‘maker’ functions (with an ‘-er’ convention: a function which makes an <em>x</em> is an <em>x</em>er, or, if <em>x</em> has hyphens in it, an <em>x</em>-er) which will make suitable functions:</p>
<pre><code>(define numeric-tree-er
(λ (nter noer)
(λ (o)
(let-values ([(nt? no?) (U* nter noer)])
(and (cons? o)
(no? (car o))
(no? (cdr o)))))))
(define numeric-object-er
(λ (nter noer)
(λ (o)
(let-values ([(nt? no?) (U* nter noer)])
(cond
[(number? o) #t]
[(cons? o) (nt? o)]
[else #f])))))</code></pre>
<p>Note that for both of these I’ve lifted the call to <code>U*</code> into a <code>let-values</code>, simply to make the calls to the functions it returns less opaque.</p>
<p>And this works:</p>
<pre><code>(define-values (numeric-tree? numeric-object?)
(U* numeric-tree-er numeric-object-er))</code></pre>
<p>And now:</p>
<pre><code>> (numeric-tree? 1)
#f
> (numeric-object? 1)
#t
> (numeric-tree? '(1 . 2))
#t
> (numeric-tree? '(1 2 . (3 4)))
#f</code></pre>
<h2 id="wrapping-u-in-a-macro">Wrapping U* in a macro</h2>
<p>The same problem as before happens when we lift the inner call to <code>U*</code>, with the same result: we need to use <code>U*/ao</code>. In addition the macro becomes significantly more hairy, and I’m moderately surprised that I got it right so easily. It’s not conceptually hard: it’s just not obvious to me that the pattern-matching works.</p>
<pre><code>(define-syntax (with-recursive-bindings stx)
(syntax-parse stx
[(_ ((name:id value:expr) ...) form ...+)
#:fail-when (check-duplicate-identifier (syntax->list #'(name ...)))
"duplicate variable name"
(with-syntax ([(argname ...) (generate-temporaries #'(name ...))])
#'(let-values
([(name ...) (U* (λ (argname ...)
(let-values ([(name ...)
(U*/ao argname ...)])
value)) ...)])
form ...))]))</code></pre>
<p>And now, in a shower of sparks, we can write:</p>
<pre><code>(with-recursive-bindings ((numeric-tree?
(λ (o)
(and (cons? o)
(numeric-object? (car o))
(numeric-object? (cdr o)))))
(numeric-object?
(λ (o)
(cond [(number? o) #t]
[(cons? o) (numeric-tree? o)]
[else #f]))))
(numeric-tree? '(1 2 3 (4 (5 . 6) . 7) . 8)))</code></pre>
<p>and get <code>#t</code>.</p>
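<p>Again for reference, here is roughly what this expands into, hand-expanded, with the generated temporaries written as <code>nt-er</code> and <code>no-er</code> (the real expansion will use uninterned names):</p>
<pre><code>(let-values ([(numeric-tree? numeric-object?)
              (U* (λ (nt-er no-er)
                    (let-values ([(numeric-tree? numeric-object?)
                                  (U*/ao nt-er no-er)])
                      (λ (o)
                        (and (cons? o)
                             (numeric-object? (car o))
                             (numeric-object? (cdr o))))))
                  (λ (nt-er no-er)
                    (let-values ([(numeric-tree? numeric-object?)
                                  (U*/ao nt-er no-er)])
                      (λ (o)
                        (cond [(number? o) #t]
                              [(cons? o) (numeric-tree? o)]
                              [else #f])))))])
  (numeric-tree? '(1 2 3 (4 (5 . 6) . 7) . 8)))</code></pre>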
<hr />
<p>As I said, I am sure there are well-known better ways to do this, but I thought this was interesting enough not to lose. This originated as an answer to <a href="https://stackoverflow.com/questions/60460322/implement-a-self-reference-pointer-in-a-pure-functional-language-elm-haskell">this Stack Overflow question</a>.</p>Polkit: waturn:https-www-tfeb-org:-fragments-2020-02-24-polkit-wat2020-02-24T16:41:11Z2020-02-24T16:41:11ZTim Bradshaw
<p>What polkit is, why you should worry about it, some ways to defang it.</p>
<!-- more-->
<h2 id="what-polkit-is">What polkit is</h2>
<p><a href="https://www.freedesktop.org/software/polkit/" title="polkit's home page">Polkit</a><sup><a href="#2020-02-24-polkit-wat-footnote-1-definition" name="2020-02-24-polkit-wat-footnote-1-return">1</a></sup> is part of the <a href="https://www.freedesktop.org/">freedesktop.org</a> project. The <a href="https://www.freedesktop.org/software/polkit/docs/latest/polkit.8.html" title="polkit(8)">documentation for polkit</a> describes what it does:</p>
<blockquote>
<p>polkit provides an authorization API intended to be used by privileged programs (“MECHANISMS”) offering service to unprivileged programs (“SUBJECTS”) often through some form of inter-process communication mechanism. In this scenario, the mechanism typically treats the subject as untrusted. For every request from a subject, the mechanism needs to determine if the request is authorized or if it should refuse to service the subject. Using the polkit APIs, a mechanism can offload this decision to a trusted party: The polkit authority.</p></blockquote>
<p>In other words, polkit provides a mechanism by which applications can run parts of themselves with elevated privilege, in a similar way that <code>sudo</code> and other mechanisms do. There are no limits to the privilege that can be gained using polkit, and in particular there is nothing preventing it from allowing programs to run as any user, including <code>root</code> via the <a href="https://www.freedesktop.org/software/polkit/docs/latest/pkexec.1.html" title="pkexec(8)"><code>pkexec</code></a> utility. As well as polkit’s own documentation the <a href="https://en.wikipedia.org/wiki/Polkit" title="Wikipedia entry">Wikipedia article</a> on it is fairly good.</p>
<p>An example of the sort of problem that polkit wants to solve, I think, is that it’s desirable that someone using a desktop system should be able to turn it off without needing to be a privileged user. But it’s rather <em>undesirable</em> that someone using the same machine via <code>ssh</code> for instance should be able to turn it off, <em>even if they are the same user</em>. So there needs to be some framework which lets you express the idea that ‘if this person is using a GUI on the console of this machine, they should be able to shut it down, but they should not be able to do that if they are not using the GUI on the console (for instance, they should almost certainly not be able to set up a <code>cron</code> or <code>at</code> job to turn the machine off)’. There are enough other such operations, such as connecting USB disks to machines, which need to have similar controls around them to make a general framework worth having.</p>
<p>Polkit ships as part of the basic installs of several Linux distributions, including (but not limited to):</p>
<ul>
<li>RHEL 7;</li>
<li>Ubuntu 19.10 (older version of polkit);</li>
<li>CentOS 7 & 8.</li></ul>
<p>Polkit is included as part of server as well as desktop installs of these platforms. I’m not sure what purpose it serves on server installs: I suspect that it may be used for device management.</p>
<h2 id="a-simple-example-of-pkexec">A simple example of pkexec</h2>
<p><code>pkexec</code> is a command-line tool which uses <code>polkit</code> to decide whether a user is allowed to run a command as another user, with that other user being, by default, <code>root</code>:</p>
<pre class="brush: bash"><code>$ groups
tfb wheel
$ id -u
1000
$ pkexec id -u
==== AUTHENTICATING FOR org.freedesktop.policykit.exec ====
Authentication is needed to run `/usr/bin/id' as the super user
Authenticating as: Tim Bradshaw (tfb)
Password:
==== AUTHENTICATION COMPLETE ====
0
$</code></pre>
<p>So you can see that <code>pkexec</code> is doing the same thing that <code>sudo</code> would do: it has some rules which say that <code>tfb</code> is allowed to do things as <code>root</code> and is then asking that user to authenticate themselves. In fact, as configured on the machine this ran on, <code>tfb</code> is allowed to become <code>root</code> by virtue of being in the <code>wheel</code> group (<code>sudo</code> has equivalent rules on this machine).</p>
<h2 id="enough-polkit-to-be-dangerous">Enough polkit to be dangerous</h2>
<p>Polkit is a big complicated system and part of an even bigger and more complicated system: in order to understand it you need to <a href="https://www.freedesktop.org/software/polkit/docs/" title="polkit manuals">read the manuals</a>, and also to understand about how things like <a href="https://www.freedesktop.org/wiki/Software/dbus/" title="D-bus">D-bus</a> work. I don’t understand all of those things, but here is enough information to be able to poke around in the configuration files and get some idea about what is going on. This is not a definitive guide: reading the manuals or the source is the only way to get that.</p>
<p>There have been at least two versions of polkit: I’m mostly describing the newer one here. As of 19.10, Ubuntu still uses an older version.</p>
<h3 id="the-names-of-things">The names of things</h3>
<ul>
<li>An unprivileged program making a request to polkit to do something is known as a <strong>subject</strong>.</li>
<li>What the unprivileged program is asking for is an <strong>action</strong>.</li>
<li>A privileged program which performs an action is a <strong>mechanism</strong>.</li>
<li>The thing that verifies whether a given subject can get a given mechanism to perform a given action is the <strong>authority</strong>.</li>
<li>An <strong>authentication agent</strong> is something which is asked by the authority to get someone or something to authenticate themselves.</li></ul>
<h3 id="an-overview-of-polkit">An overview of polkit</h3>
<div class="figure"><img src="/fragments/img/2020/polkit-wat/polkit-overview-20200131.svg" alt="Polkit overview" />
<p class="caption">Polkit overview</p></div>
<p>In this figure:</p>
<ul>
<li>links in red are (usually?) mediated by dbus;</li>
<li><code>polkitd</code> is the authority at the centre of the process, and deals with checking if an action is allowed, and getting authentication for it;</li>
<li>the policies files describe what actions exist;</li>
<li>the rules files provide rules which tell you if a given requested action should be allowed.</li></ul>
<p>The most important part of the process is <code>polkitd</code>, together with the rules and policies files it consults.</p>
<p>I am fairly sure that the requesting program (subject) and the privileged program (mechanism) can be the same: this is the case for <code>pkexec</code> for instance. However it may be that the intent is that the subject is whatever invoked <code>pkexec</code> in this case.</p>
<h3 id="polkitd">polkitd</h3>
<p><code>polkitd</code> is the daemon which is at the centre of polkit. Its job is to serve as the authority: it answers the question of whether a given request should be allowed or not and deals with any required authentication by talking to an authentication agent. <code>polkitd</code> does not itself have any particular privilege, and runs as the <code>polkitd</code> user: the questions it answers can be very critical to security however.</p>
<p><code>polkitd</code> is configured by two sets of files:</p>
<ul>
<li>policy files, also known as action files, which describe what sort of ‘actions’ polkit knows about;</li>
<li>rules files, which describe the conditions under which a given action should be allowed.</li></ul>
<h3 id="policy-files">Policy files</h3>
<p>Policy files live in the <code>/usr/share/polkit-1/actions/</code> directory, and have extension <code>policy</code>. All the files in that directory are read, and I’m reasonably sure that <code>polkitd</code> watches for changes in the directory and reads or rereads things appropriately.</p>
<p>Policy files are XML, and their content is described in <a href="https://www.freedesktop.org/software/polkit/docs/latest/polkit.8.html" title="polkit(8)">polkit(8)</a>. The important elements are <code><action></code>s, which specify what the actions are. A given policy file can specify many actions. Because the files are XML and also because they often have a lot of internationalisation support they are fairly hard to read. However there’s a nice utility called <code>pkaction</code> which will tell you what actions exist and display them in a more readable format: <code>pkaction</code> on its own will list all of the available actions and <code>pkaction --verbose</code> will display details about them. You can also use the <code>--action-id</code> option to specify an individual action to display, as here:</p>
<pre class="brush: bash"><code>$ pkaction --verbose --action-id org.freedesktop.policykit.exec
org.freedesktop.policykit.exec:
description: Run a program as another user
message: Authentication is required to run a program as another user
vendor: The polkit project
vendor_url: http://www.freedesktop.org/wiki/Software/polkit/
icon:
implicit any: auth_admin
implicit inactive: auth_admin
implicit active: auth_admin </code></pre>
<p>This corresponds to the following XML fragment<sup><a href="#2020-02-24-polkit-wat-footnote-2-definition" name="2020-02-24-polkit-wat-footnote-2-return">2</a></sup>:</p>
<pre><code><action id="org.freedesktop.policykit.exec">
<description>Run a program as another user</description>
<message>Authentication is required to run a program as another user</message>
<defaults>
<allow_any>auth_admin</allow_any>
<allow_inactive>auth_admin</allow_inactive>
<allow_active>auth_admin</allow_active>
</defaults>
</action></code></pre>
<p>The <code>org.freedesktop.policykit.exec</code> action is the one that <code>pkexec</code> uses to do things: the policy file that specifies it is probably <code>/usr/share/polkit-1/actions/org.freedesktop.policykit.policy</code>.</p>
<p>The interesting part of action specifications in policy files is their defaults: these tell you what is required to perform the action in various circumstances. <code>pkaction</code> reports these defaults as <code>implicit ...</code> at the end. It’s not completely clear from the documentation, but I strongly assume that these are <em>minimum</em> requirements for the action to be performed. In the example above, anything requesting the action is required to authenticate as an administrative user, and that authentication is not remembered for any period.</p>
<p>Additionally, annotations can be added: these are key/value pairs which let you specify various things like paths.</p>
<h3 id="rules-files">Rules files</h3>
<p>Rules files live in two locations: <code>/etc/polkit-1/rules.d</code> and <code>/usr/share/polkit-1/rules.d</code>, and have extension <code>rules</code>. All files in both directories are read, after being sorted in lexical order by filename, with files in <code>/etc</code> being read first when there’s a tie. The daemon watches for changes in the directories and rereads everything in that case.</p>
<p>The content of rules files is JavaScript. Polkit defines an object called <code>polkit</code> and there are various methods on this object which do useful things:</p>
<ul>
<li><code>addRule(fn)</code> adds a rule, which is a function which, given arguments representing an action and a subject, is responsible for saying if the action is allowed and what authorisation is needed to run it;</li>
<li><code>addAdminRule(fn)</code> adds a rule — a function again — which gets to say what counts as being an administrator;</li>
<li><code>log(message)</code> will log things in some suitable way;</li>
<li><code>spawn(argv)</code> will spawn a program, capturing its output.</li></ul>
<p>The functions added by <code>addRule</code> are called in the order they were added, until one returns a non-null result, which can either unconditionally allow or deny the action, or require authorisation of various kinds.</p>
<p>The functions added by <code>addAdminRule</code> are called in the order they were added until one returns a description of what an administrator is.</p>
<p>These functions can call <code>polkit.log(...)</code> to log things and <code>polkit.spawn(...)</code> to run programs.</p>
<p>There are bounds on how long a rule may run for, and also on how long programs spawned by <code>polkit.spawn(...)</code> can run for.</p>
<p>More details on the rules files are in <a href="https://www.freedesktop.org/software/polkit/docs/latest/polkit.8.html" title="polkit(8)">the documentation</a>.</p>
<h3 id="example-rules-and-actions">Example rules and actions</h3>
<p>Here is a sample rule which tries to require administrator authentication to run <code>pkexec</code>:</p>
<pre class="brush: js"><code>polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.policykit.exec") {
polkit.log("pkexec rule hit\n");
return polkit.Result.AUTH_ADMIN;
} else {
polkit.log("pkexec rule missed\n");
return polkit.Result.NOT_HANDLED;
}});</code></pre>
<p>If this is installed as, for instance, <code>/usr/share/polkit-1/rules.d/00-pkexec.rules</code>, then it will try to ensure that anyone trying to use <code>pkexec</code> requires administrator authorisation (equivalently: is required to authenticate themselves as an administrator). Since it is almost certainly first in the sort order, it also gets to control things before any other rules get their hands on things.</p>
<p>Except this rule does not work: it <em>does</em> catch actions whose id is <code>org.freedesktop.policykit.exec</code>, but these are <em>not</em> the only actions which <code>pkexec</code> can use: it can also use actions which have an <code>org.freedesktop.policykit.exec.path</code> annotation. For instance this policy file</p>
<pre class="brush: html"><code><?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE policyconfig PUBLIC "-//freedesktop//DTD polkit Policy Configuration 1.0//EN"
"http://www.freedesktop.org/software/polkit/policyconfig-1.dtd">
<policyconfig>
<vendor>The sinister TFEB organisation</vendor>
<vendor_url>https://www.tfeb.org/</vendor_url>
<action id="org.tfeb.tc.explode">
<description>Explode</description>
<message>Authentication is not required to explode</message>
<annotate
key="org.freedesktop.policykit.exec.path">/usr/sbin/explode</annotate>
<defaults>
<allow_any>yes</allow_any>
<allow_inactive>yes</allow_inactive>
<allow_active>yes</allow_active>
</defaults>
</action>
</policyconfig>
</code></pre>
<p>will allow <code>/usr/sbin/explode</code> to be run by <code>pkexec</code> with no authentication at all:</p>
<pre class="brush: bash"><code>$ /usr/sbin/explode
exploded as UID 1000 GID 1000
$ pkexec /usr/sbin/explode
exploded as UID 0 GID 0</code></pre>
<p>To catch this, one approach is to rely on the fact that the <code>Action</code> objects passed to the rule have properties which can be looked up with a <code>lookup</code> method, and <code>pkexec</code> sets a <code>program</code> property. So the following version of the above rule should catch all <code>pkexec</code> actions:</p>
<pre class="brush: js"><code>polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.policykit.exec"
|| action.lookup("program")) {
polkit.log("pkexec rule hit\n");
return polkit.Result.AUTH_ADMIN;
} else {
polkit.log("pkexec rule missed\n");
return polkit.Result.NOT_HANDLED;
}});</code></pre>
<p>A similar rule can simply disable <code>pkexec</code> altogether<sup><a href="#2020-02-24-polkit-wat-footnote-3-definition" name="2020-02-24-polkit-wat-footnote-3-return">3</a></sup>:</p>
<pre class="brush: js"><code>polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.policykit.exec"
|| action.lookup("program")) {
polkit.log("pkexec rule hit\n");
return polkit.Result.NO;
} else {
polkit.log("pkexec rule missed\n");
return polkit.Result.NOT_HANDLED;
}});</code></pre>
<p>And now:</p>
<pre class="brush: bash"><code>$ pkexec /usr/sbin/explode
Error executing command as another user: Not authorized
This incident has been reported.</code></pre>
<h2 id="why-polkit-is-a-security-disaster">Why polkit is a security disaster</h2>
<p>There are at least two reasons why the way polkit works is a security disaster:</p>
<ul>
<li>expressing rules in JavaScript (or any general programming language) is a terrible idea;</li>
<li>the implementation is deficient.</li></ul>
<h3 id="writing-rules-in-a-general-purpose-language-is-a-terrible-idea">Writing rules in a general-purpose language is a terrible idea</h3>
<p>It might seem like a clever idea to write rules in JavaScript:</p>
<ul>
<li>using a general-purpose programming language means that very general rules can be implemented;</li>
<li>given that decision JavaScript is a common language which is not entirely awful.</li></ul>
<p>But in fact this is a terrible idea, just <em>because</em> it means that very general rules can be implemented. In particular <strong>it is not possible, even in principle, to statically determine what polkit will allow or deny</strong>. JavaScript is a fully-fledged programming language which means that the only way you can know what a program will do, in general, is to run it. There is, at least, no halting problem since the execution time of the rules is bounded, but all of the other problems associated with general-purpose programming languages are still present.</p>
<p>What this means is that any kind of security analysis of a system needs to</p>
<ul>
<li>check the rules are valid JavaScript, which can be done statically;</li>
<li>check what the rules do, which can’t be done statically, but requires the rules to be run.</li></ul>
<p>A possible counter argument to this is</p>
<blockquote>
<p>Well, only very simple rules will ever be written: no-one is actually going to make use of all this power. In particular the rules people actually write will be so simple that they can in fact be analysed statically.</p></blockquote>
<p>That’s exactly the same argument as</p>
<blockquote>
<p>Well, no-one is ever going to do anything bad, so they can all have the root password.</p></blockquote>
<p>and it’s equally stupid. Secure systems should make it <em>impossible</em> to do things which are undesirable, not rely on people just not doing them. The language in which rules are expressed should be just expressive enough to allow the options needed, but no more expressive than that, and it should certainly always be possible to statically analyse a rule to know what it will allow. Using a general-purpose programming language for rules is just dumb.</p>
<p>Just to drive home this point it turns out that the rules supplied with the system are indeed mildly hard to analyse: here is <code>/etc/polkit-1/rules.d/49-polkit-pkla-compat.rules</code> from a CentOS 8 system:</p>
<pre class="brush: js"><code>polkit.addAdminRule(function(action, subject) {
//polkit.log('Starting pkla-admin-identities\n');
// Let exception, if any, propagate to the JS authority
var res = polkit.spawn(['/usr/bin/pkla-admin-identities']);
//polkit.log('Got "' + res.replace(/\n/g, '\\n') + '"\n');
if (res == '')
return null;
var identities = res.split('\n');
//polkit.log('Identities: ' + identities.join(',') + '\n');
if (identities[identities.length - 1] == '')
identities.pop()
//polkit.log('Returning: ' + identities.join(',') + '\n');
return identities;
});
polkit.addRule(function(action, subject) {
var params = ['/usr/bin/pkla-check-authorization',
subject.user, subject.local ? 'true' : 'false',
subject.active ? 'true' : 'false', action.id];
//polkit.log('Starting ' + params.join(' ') + '\n');
var res = polkit.spawn(params);
//polkit.log('Got "' + res.replace(/\n/g, '\\n') + '"\n');
if (res == '')
return null;
return res.replace(/\n$/, '');
});</code></pre>
<p>Well, it’s possible to work out what this is doing, if you try hard. But note that, in particular what it is doing is deferring to completely separate programs both to work out who administrative users are, and whether an action should be allowed. So now you need to understand that program as well. And yes, it is doing all sorts of string hacking to parse the output of that program, which is always a really good sign.</p>
<h3 id="the-implementation-is-deficient">The implementation is deficient</h3>
<p>Even given the design, polkit’s implementation is deficient.</p>
<p>The first and most obvious sign of deficiency is that rules can invoke external programs: those programs run as the <code>polkitd</code> user and can do anything it can do, including writing to the filesystem.</p>
<p>If <a href="https://selinuxproject.org/page/Main_Page" title="SELinux">SELinux</a> is enabled on the system (which can be checked with <code>sestatus</code>), and if the correct policy is loaded, then it may well prohibit this, as polkit’s rules run under a policy which prevents them writing to the filesystem. But <code>polkitd</code> doesn’t check that SELinux is enforcing, or that the correct policy is in place: it just blunders on, trusting whatever external programs it runs to be well-behaved.</p>
<p>But this is only the start of the horrors. The actions, and even more so the rules, that <code>polkitd</code> uses are security-critical. If I can install an early rule such as, for instance</p>
<pre class="brush: js"><code>polkit.addRule(function(action, subject) {
return polkit.Result.YES;
});</code></pre>
<p>then I have completely bypassed security on the system, because <code>pkexec</code> will let me do anything with no authentication at all.</p>
<p>So polkit, and specifically <code>polkitd</code> should be very careful about the ownership and permissions of the files and directories it looks at. In particular everything in the path down to any file it looks at should be owned by a privileged user and writable only by that user, and <code>polkitd</code>. That user should almost certainly be <code>root</code>. <code>polkitd</code> should check this every time it reads anything.</p>
<p>It doesn’t do that. In fact it doesn’t check at all:</p>
<pre class="brush: bash"><code>$ id
uid=1000(tfb) gid=1000(tfb) groups=1000(tfb),10(wheel)
$ pwd
/usr/share/polkit-1/rules.d
$ ls -ld .
drwxrwx---. 2 polkitd tfb 80 Feb 24 14:23 .
$ cat > 00-bypass.rules
polkit.addRule(function(action, subject) {
return polkit.Result.YES;
});
$ pkexec
#
</code></pre>
<p>In the presence of a massive, easily-detectable, security compromise like this, <code>polkitd</code> should refuse to do anything at all and log security alerts. It doesn’t: it just blunders on.</p>
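<p>Until it does check, a crude manual audit is possible. Something like this (GNU <code>find</code>; the paths are the defaults described above) will list anything under the polkit configuration directories which is group- or world-writable, all of which should be treated as suspect:</p>
<pre class="brush: bash"><code>$ find /etc/polkit-1 /usr/share/polkit-1 -perm /022 -ls</code></pre>
<p>This is no substitute for <code>polkitd</code> checking itself, of course: it only tells you about the state of things at the moment you ran it.</p>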
<p>Finally, the default owner of, for instance, <code>/usr/share/polkit-1/rules.d/</code> is <code>polkitd</code>: this might seem reasonable, except that it means that any external program spawned by a rule could, for instance, <em>write a rule</em> (unless SELinux prevents this, which it will only do if it’s enabled). This is an acceptable risk only if you assume that no external program is ever compromised, even momentarily, and that if it is then all is immediately lost. It would also help if rules were easy to analyse: it’s quite possible to imagine a rule which could be persuaded to execute some program of an attacker’s choosing. This is all just extremely brittle: secure systems are not brittle.</p>
<p>I found these problems on rather casual inspection of polkit. There may very well be others, and I’d assume since I found these so easily that there are.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Polkit is yet another mechanism which allows privilege escalation on Linux systems: it has functionality broadly equivalent to programs like <a href="https://www.sudo.ws/" title="sudo"><code>sudo</code></a>. Every additional mechanism for privilege escalation increases the attack surface of the system and increases the burden on people who need to ensure the security of systems, and is thus undesirable of itself.</p>
<p>Additionally, polkit:</p>
<ul>
<li>is significantly complicated;</li>
<li>has rules which govern privileged access which can’t be statically analysed in general by design, and which can invoke arbitrary programs during their evaluation;</li>
<li>has serious security problems in its implementation.</li></ul>
<p>Polkit almost certainly contains other security problems. Red Hat, and probably other vendors, now ship polkit as part of core installs and will not support systems without it<sup><a href="#2020-02-24-polkit-wat-footnote-4-definition" name="2020-02-24-polkit-wat-footnote-4-return">4</a></sup>. This means it’s hard to remove: a safe approach is therefore to defang it by installing a rule which simply denies access altogether: install a file in <code>/etc/polkit-1/rules.d/00-defang.rules</code> which contains</p>
<pre class="brush: js"><code>polkit.addRule(function(action, subject) {
return polkit.Result.NO;
});</code></pre>
<p>Such a rule should minimise the security risk from polkit, if it can’t be removed.</p>
<hr />
<h2 id="appendices">Appendices</h2>
<h2 id="disclaimer">Disclaimer</h2>
<p>All of this is what I’ve worked out by playing around with polkit. Any of it may be wrong, and in particular all of the rules or actions above are only samples: you should check them yourself, and I’m not responsible if they don’t work.</p>
<h3 id="dealing-with-no-session-for-cookie-errors-from-pkexec">Dealing with ‘No session for cookie’ errors from pkexec</h3>
<p>If this happens<sup><a href="#2020-02-24-polkit-wat-footnote-5-definition" name="2020-02-24-polkit-wat-footnote-5-return">5</a></sup>:</p>
<pre class="brush: bash"><code>$ pkexec id -u
==== AUTHENTICATING FOR org.freedesktop.policykit.exec ====
Authentication is needed to run `/usr/bin/id' as the super user
Authenticating as: Tim Bradshaw (tfb)
Password:
polkit-agent-helper-1: error response to PolicyKit daemon: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: No session for cookie
==== AUTHENTICATION FAILED ====
Error executing command as another user: Not authorized
This incident has been reported.</code></pre>
<p>then this seems to be because of some problem with the authentication agent. Here is a terrible hack to make it work so you can test things.</p>
<ol>
<li>Open another terminal window to the same machine.</li>
<li>In the main terminal window find the PID of the shell by <code>echo $$</code>.</li>
<li>In the second window run <code>pkttyagent --process PID</code>, using the PID from the previous step.</li>
<li>When you authenticate you will now get prompted by the <code>pkttyagent</code> running in the second window.</li></ol>
<p>Yes, this is as horrid as it sounds, but it’s enough to get by.</p>
<h2 id="wat">Wat?</h2>
<p><a href="https://www.destroyallsoftware.com/talks/wat" title="wat">Wat</a>.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2020-02-24-polkit-wat-footnote-1-definition" class="footnote-definition">
<p>Previously known as ‘PolicyKit’. <a href="#2020-02-24-polkit-wat-footnote-1-return">↩</a></p></li>
<li id="2020-02-24-polkit-wat-footnote-2-definition" class="footnote-definition">
<p>The actual XML is more complicated than this as it includes versions of the description & message in several languages. The <code><action></code> element is also not the top-level element. <a href="#2020-02-24-polkit-wat-footnote-2-return">↩</a></p></li>
<li id="2020-02-24-polkit-wat-footnote-3-definition" class="footnote-definition">
<p>DISCLAIMER: while I believe this rule disables <code>pkexec</code> completely, I don’t warrant that it does: <em>caveat emptor</em>. <a href="#2020-02-24-polkit-wat-footnote-3-return">↩</a></p></li>
<li id="2020-02-24-polkit-wat-footnote-4-definition" class="footnote-definition">
<p>This raises questions about the approach of these companies to security, of course, which I’m not addressing here. <a href="#2020-02-24-polkit-wat-footnote-4-return">↩</a></p></li>
<li id="2020-02-24-polkit-wat-footnote-5-definition" class="footnote-definition">
<p>This seems to be a problem with RHEL 8, but not RHEL 7 (based on experiments with CentOS 8 & 7 respectively). <a href="#2020-02-24-polkit-wat-footnote-5-return">↩</a></p></li></ol></div>Those who control the presenturn:https-www-tfeb-org:-fragments-2020-02-19-those-who-control-the-present2020-02-19T16:09:58Z2020-02-19T16:09:58ZTim Bradshaw
<p>‘Those who control the present can rewrite the past’ — Anne Fortier</p>
<!-- more-->
<hr />
<p>‘The free trade agreement we will have to do should be one of the easiest in human history’ — Liam Fox, 2016</p>
<p>‘The day after we vote to leave we hold all the cards and we can choose the path we want’ — Michael Gove, 2016</p>
<p>‘There will continue to be free trade and access to the single market’ — Boris Johnson, 2016</p>
<p>‘Not a single job would be lost because of Brexit’ — Lord Digby Jones, 2016</p>
<p>‘After we vote Leave, we would immediately be able to start negotiating new trade deals with emerging economies and the world’s biggest economies which could enter into force immediately after the UK leaves the EU’ — Leave Campaign, 2016</p>
<p>‘We will maintain a free flowing border at Dover. We will not impose checks at the port. The only reason we would have queues at the border is if we put in place restrictions that created those queues. We are not going to do that’ — Chris Grayling, 2018</p>
<p>‘Absolutely nobody is talking about threatening our place in the single market’ — Daniel Hannan, 2016</p>
<hr />
<p>How long will it be before these statements were never made?</p>The death of hopeurn:https-www-tfeb-org:-fragments-2020-01-12-the-death-of-hope2020-01-12T11:36:29Z2020-01-12T11:36:29ZTim Bradshaw
<p>In 2016 you voted for brexit. But you voted for it because the leave campaign lied to you, of course: not because you didn’t like foreigners very much and didn’t care very much about your children’s future.</p>
<!-- more-->
<p>Of course you didn’t vote for those reasons: how insulting and arrogant of me to even suggest you might have, to suggest that you might be even a little bit selfish or even a little bit racist. Instead I am meant to believe that you are so stupid that you believed the lies you were told.</p>
<p>Well, I don’t believe that. I don’t think you are stupid: I think you knew what you were asking for in 2016, whatever you might claim.</p>
<p>On the 12th of December 2019, enough of you voted for brexit again to ensure it happens. This time you don’t get to claim you were lied to: you knew what brexit means because you are indeed not stupid and you have not been fooled by the lies the laughing clown and the people pulling his strings tell. This time you know that brexit means that your children will not have the opportunities you had, that we will not fix climate change and that your grandchildren will live with the terrible consequences of that failure. But you don’t care about your children & still less about their children, do you? All you care about is yourself and that there should be less foreign people: you really don’t like foreign people, do you? And you will get your racist fantasies fulfilled even if it means murdering your own children.</p>
<p>And when it’s done, when the nasty foreigners are gone and somehow your dream racist empire has not arisen, who are you going to blame then? Who will be the next group to be eliminated? The Gypsies first, I expect, and then the Jews and anyone not ‘English’ enough for you, the ‘liberals’, the ‘deviants’, the scientists and anyone who wanted to remain in the EU, anyone who wanted there to be hope. Especially them.</p>
<p>Finally, when they are all gone (who will wonder where they have gone?), your empire of mud will be complete and you will turn on each other.</p>
<p>It has been coming for a long time, but this is the moment when hope for the future died. You are old and you know there is no hope for you: your future holds only slow physical and mental decline, with death at the end of it, as does mine. But you can’t bear to think that there are other, younger, people who might have less dark futures ahead of them than yours. So you have put out the lamps of their futures along with those of your own. Because there is no light in your life you have extinguished the light of the world.</p>
<p>How dare you?</p>Burning the futureurn:https-www-tfeb-org:-fragments-2019-11-25-burning-the-future2019-11-25T12:26:04Z2019-11-25T12:26:04ZTim Bradshaw
<p>Whatever you think about brexit, there is something which matters more. And brexit is not compatible with that thing.</p>
<!-- more-->
<p>Boris Johnson is lying when he says he will ‘get brexit done’: it’s always been obvious that <a href="https://www.bbc.co.uk/news/uk-50222315" title="General Election 2019: What does 'Get Brexit done' mean?">brexit will take years</a>. The much-trumpeted deal — if the UK leaves with a deal — covers only a small proportion of what needs to be done as the UK leaves the EU. After the UK leaves it needs to sort out not only what its future relationship will be with the EU but also all the other relationships it has not had to address independently for more than a generation. This is a lot of work.</p>
<p>Well: this would be a lot of work if the UK had recent experience of conducting such negotiations on its own behalf. It doesn’t, as it has been able to rely on being part of the EU for a very long time. Everyone who knew how to do this in the UK has retired or died. This has become very apparent over the time since the referendum, as the UK has made a laughing stock of itself in the most public way possible. Indeed, some people seem to have forgotten even that negotiating complicated deals is hard: Liam Fox famously said that</p>
<blockquote>
<p>The free trade agreement that we will have to do with the European Union should be one of the easiest in human history.</p></blockquote>
<p>If we assume he was not just lying or bullshitting then his ignorance is fairly astonishing: he was the secretary of state for international trade at the time, and clearly just had <em>no idea</em> of the effort involved. There are really only two choices here: either he was hugely incompetent to do his job, or he was lying.</p>
<p>So the negotiations will not be a lot of work: they will be an overwhelming amount of work as the entire organisation in the UK has to relearn skills it has forgotten, all while negotiating in many cases with larger entities with current experience, such as the EU.</p>
<p>The end result of this is that brexit is going to take essentially all of the available resources of the UK government and civil service for many years: conservatively a decade. During that time a lot of other things which need to be done simply won’t happen. In fact, that’s something like the best case: anyone who has ever done a job which is really several jobs knows what can happen when the load of things to do becomes so overwhelming that even working out what to do next becomes impossible. When that happens everything just collapses, and essentially nothing gets done. That can happen to organisations as well, and it’s equally bad: the term for this in extreme cases is ‘failed state’.</p>
<p>This will all be particularly bad if there is some important task which can’t wait: something which needs to be done in the next decade, if it is to be done at all.</p>
<p>There is. We have about a decade (perhaps rather less, perhaps a little more) to start dealing with anthropogenic climate change in a really serious way. The IPCC special report <a href="https://www.ipcc.ch/sr15/" title="Global Warming of 1.5ºC">Global Warming of 1.5ºC</a> makes this very clear, and in particular it is easy to play with the <a href="https://apps.ipcc.ch/report/sr15/fig1/index.html" title="interactive figure">interactive figure</a>, which lets you explore how the future temperature increase depends on the year that net-zero emissions is reached. That figure, in fact, is interestingly optimistic: the very worst case it allows is for net-zero emissions to be reached in 2100. Currently there is no real indication that net-zero will be reached <em>at all</em>.</p>
<p>Dealing with climate change, if it’s to be done at all, will need a huge, international effort, and brexit means that the UK will play essentially no part in that effort.</p>
<p>At this point you probably expect a huge section on why anthropogenic climate change is in fact a real thing. But, really, I can’t be bothered writing that: if you don’t believe in climate change then you should just <a href="https://history.aip.org/climate/index.htm" title="The Discovery of Global Warming">do</a> <a href="https://skepticalscience.com/">some</a> <a href="https://www.ipcc.ch/">reading</a> and stop listening to people who are being paid to lie to you by the oil companies. The truth is out there, but it does not involve aliens.</p>
<p>So, well, OK, what about the people who know it’s a real problem but still don’t bother, because they think that brexit, or having a nice new car, or their next holiday, or, really, anything, is more important? People like everyone’s favourite clown prince, Boris Johnson, who <a href="https://www.bbc.co.uk/news/election-2019-50596192" title="General election 2019: Row over Boris Johnson debate 'empty chair'">didn’t bother turning up to a debate on it</a> and then threatened the organisation holding the debate. I don’t know, perhaps he’s just a coward and was too frightened to face the other party leaders. But I don’t think it was just that: I think that he and his party just don’t care about climate change. Getting rid of nasty foreigners and bringing back the glorious British empire (which, of course, will be run by the English, led by the great Boris before whom all will kneel) is just much more important to him than his children’s future. And this is the case for lots of people: climate change is this slow thing which doesn’t really matter much <em>now</em> and by the time it does matter, well, that’s a long way away and someone else will deal with it and who really cares about their children anyway?</p>
<p>And that’s the thing: if you don’t care about climate change, what you really mean is that you <em>do not care about your own children’s future</em>. If you think brexit is more important than climate change then you are burning your own children’s lives on a fire you have built specially for the purpose.</p>
<p>If you think that brexit and dealing with climate change are compatible then you’re naïve: if you think that brexit is more important than climate change then fuck you.</p>
<p>Brexit or your children’s future: pick one.</p>Clown fascistsurn:https-www-tfeb-org:-fragments-2019-08-28-clown-fascists2019-08-28T19:18:24Z2019-08-28T19:18:24ZTim Bradshaw
<p>Welcome to the age of the clown fascists.</p>
<!-- more-->
<p>Just as dangerous and unpleasant as traditional fascists but, you know, clowns as well. With mad, blond clown hair, some of it even real. And, like all the best clowns, really fucking creepy: either making not-jokes about wanting to have sex with their own daughter, or having creepy relationships with much younger women who they almost certainly are not beating up. Or, who knows, both.</p>
<p>And the Amazon is burning.</p>
<p>How did we get here?</p>1C1L1T1Yurn:https-www-tfeb-org:-fragments-2019-07-30-1c1l1t1y2019-07-30T11:20:04Z2019-07-30T11:20:04ZTim Bradshaw
<p>One camera, one lens, one theme, one year: one way to be less bad as a photographer.</p>
<!-- more-->
<p>This is a slightly indirect answer because it does not really say anything concrete about photography, but it is worth knowing I think.</p>
<h2 id="popular-people">Popular people</h2>
<p>There’s a well-known phenomenon, sometimes called the ‘friendship paradox’, which is that a typical person tends to know people who are <em>more popular</em> than them. This seems odd at first blush, but it’s not: if a typical person knows, say, \(n\) people, then a very popular person might know \(3n\) people. The statistics then work out so that many of the people the typical person knows are more popular than they are, simply because popular people know so many more people than they do. And this can cause people to think that, because most of the people they know are very popular, they are failing in some way: they are <em>worse</em> than average, when in fact they are just average.</p>
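<p>A toy worked example, with numbers invented purely for illustration: imagine a group of \(100\) people in which one ‘hub’ knows the other \(99\), while everyone else knows the hub plus \(4\) of their peers. The mean number of acquaintances is \((99 + 99 \times 5)/100 \approx 5.9\), but each ordinary person, looking at their own \(5\) acquaintances, sees a mean of \((99 + 4 \times 5)/5 = 23.8\). So almost everyone in the group knows people who are, on average, far more popular than they are, even though almost everyone is perfectly typical.</p>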
<h2 id="popular-photographs">Popular photographs</h2>
<p>The same thing happens with photography, except in an even worse way. Firstly, the photographs by other people that you see will tend to be the popular ones, because that’s how social media works; secondly, the photographs you see by someone else are <em>the pictures they like the very best</em>, because those are the only ones they are putting up. But you see <em>all</em> your pictures, including the 90% of them which are just not very good.</p>
<p>So now you have three things working against you:</p>
<ul>
<li>you see mostly pictures by photographers that lots of people like;</li>
<li>you only see the very best pictures by these people;</li>
<li>and finally you’re a beginner, so you <em>really are</em> not very good yet.</li></ul>
<p>The result of this is that you’ll just end up thinking that all your pictures are rubbish, and get demotivated, give up and become a dentist or something (well, now you can afford a very expensive camera, anyway, which you will eventually sell and I’ll buy cheaply: thanks!).</p>
<p>There is no magic solution to this, and in particular there is no <em>quick</em> solution: getting good at anything takes time. Here is an approach which works for at least some people.</p>
<h2 id="the-1c1l1t1y-structure">The 1C1L1T1Y structure</h2>
<p><strong>1C1L.</strong> First of all pick a camera and lens: just one of each. It does not matter very much what you pick, but you might want to be informed by the next step. It is allowed to <em>buy</em> a camera and/or lens in this step, but you may get extra points for not doing so.</p>
<p><strong>1T.</strong> Now pick a theme: something you are interested in taking pictures of. A theme might be ‘street photographs’ or ‘macro pictures of moss in walls’ or ‘nightclub photographs’ or ‘abandoned buildings’: it does not matter, but it should be something you actually <em>want to do</em> and something you <em>can</em> do — don’t pick ‘street photographs’ if you live 100 miles from the nearest city!</p>
<p><strong>1Y.</strong> Now you are going to take pictures on this theme, with this one camera and lens, for a year, and you’re going to do it in a rather structured way. It doesn’t have to be a year, although it should be at least a month. You are allowed to take pictures which are not on this theme and not with this camera and lens, but you need to know very clearly when you are not working on the project, and catalogue the images differently. But, again, you get bonus points for working only on the project.</p>
<p><strong>The structure.</strong> You should take some pictures as part of the project <em>every day</em> (it does not have to be every day, but it should most definitely be more frequently than weekly, and if the project is significantly shorter than a year it should be daily). On each day (or time period) you should take few enough pictures that you can look, hard, at all of them: this probably means no more than a hundred (traditionally this would have been a single 35mm roll, 36–39 pictures, and that’s a good number). There is <em>no point</em> in taking so many pictures you can not look hard at them all, because you are going to need to do that.</p>
<p>At the end of each day (time period) look, hard, at all the pictures you have made. You can tart them up if you want but you don’t need to do that. Make conscious decisions about which you like and which you don’t as far as you can: try and make conscious decisions about <em>why</em> you like & do not like them. It may help to write notes on this. Pick the one you like best, make a ‘final’ version of it and put it away somewhere. (Traditionally this means: make a contact sheet from your film, pick the frame you like best, make the best print you can, put it in a box). Once you have done this <em>you should not look at either the pictures you did not select or the one you did again during the project</em>. This is important.</p>
<p>Iterate this for a year (or however long you are doing it for). Just keep at it: it will sometimes be boring and you will feel you are getting nowhere, but keep going. Do <em>not</em> look at the selected pictures you made earlier in the project.</p>
<p>At the end of the project, do two things.</p>
<p>Get all the selected pictures, and look at them, one at a time, <em>in order</em>. You will (almost certainly) find that the earlier ones are rubbish, and the later ones are increasingly good. You may well find dips & peaks on the way, where you got sucked into something which turned out to be a dead-end and then found your way out.</p>
<p>Go through the images you did <em>not</em> select (not too quickly: remember humans can’t take in thousands of images in a short period of time), and see if you would select the same ones, or if there are things in there you did not even see at the start of the project: chances are there will be.</p>
<p>If you made notes as you went along, look at them along with the appropriate pictures and decide if you agree with your earlier self.</p>
<hr />
<p>This approach is not going to make you a brilliant photographer: but the chances are very good it will make you a <em>better</em> photographer, and it will also help you realise that you are <em>improving over time</em>. Finally it may help you work out what you actually want to make pictures of.</p>
<p>This approach is stolen from various ideas by <a href="https://theonlinephotographer.typepad.com/">Mike Johnston</a>, who is well worth reading on this and many other matters (seriously: read his blog, it’s worth it). In particular see his <a href="https://theonlinephotographer.typepad.com/the_online_photographer/2009/05/a-leica-year.html">Leica year</a> article & related articles. <em>It doesn’t have to be a Leica</em>, and in particular, in my version of the project, you’re strongly encouraged to use the gear you have.</p>
<h1>Seeing Apollo</h1>
<p>Tim Bradshaw, 2019-07-24</p>
<p>Why you can’t see the Apollo lunar landing sites from Earth.</p>
<!-- more-->
<p>Something that Apollo denialists<sup><a href="#2019-07-24-seeing-apollo-footnote-1-definition" name="2019-07-24-seeing-apollo-footnote-1-return">1</a></sup> sometimes say is: if the Apollo programme put people on the Moon, why can’t we see the landing sites?</p>
<p>Well, we can. In 2009, the Lunar Reconnaissance Orbiter (LRO), in orbit around the Moon, <a href="https://www.nasa.gov/mission_pages/LRO/multimedia/lroimages/apollosites.html">took pictures of some of the Apollo landing sites</a>, including an <a href="https://www.nasa.gov/mission_pages/LRO/multimedia/lroimages/lroc_20090903_apollo12.html">astonishing picture</a> of the Apollo 12 landing site in which you can see Surveyor 3 as well as clear signs of tracks made by the astronauts as they walked on the Moon. It is even possible to work out <a href="https://www.hq.nasa.gov/alsj/ApolloFlags-Condition.html">what happened to some of the flags</a> & that <a href="https://www.hq.nasa.gov/alsj/a12/a12FlagStillAloft.html">the flag planted by the Apollo 12 astronauts is still standing</a>. In 2011 LRO <a href="https://www.nasa.gov/mission_pages/LRO/news/apollo-sites.html">took some even better pictures</a>: in these images it is very easy to see the tracks left by the astronauts. <a href="https://www.nasa.gov/mission_pages/apollo/revisited/index.html">A summary page</a> points to images of the landing sites of Apollo 11, 12, 14, 15 & 16.</p>
<p>‘But’, they say, ‘you can’t see the sites <em>from Earth</em>, and we don’t believe that LRO actually exists: it’s just part of the giant Apollo conspiracy. If people landed on the moon you would be able to see them with Earth-based telescopes<sup><a href="#2019-07-24-seeing-apollo-footnote-2-definition" name="2019-07-24-seeing-apollo-footnote-2-return">2</a></sup>’.</p>
<p>Here’s why you can’t.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Angular_resolution">angular resolution</a> of a telescope with diameter \(D\) working at a wavelength of \(\lambda\) is given by:</p>
<p>\[\Delta\theta = 1.220 \frac{\lambda}{D}\]</p>
<p>This is the smallest angle it can resolve, even in theory. The \(1.220\) comes from the properties of the <a href="https://en.wikipedia.org/wiki/Airy_disk">Airy discs</a> that correspond to the diffraction patterns of light — in fact it’s the position of the first zero of intensity in the pattern.</p>
<p>For a telescope where \(\Delta\theta \ll 1\) (which is true for anything worth being called a telescope) we can translate this into an absolute resolution, \(\Delta l\), at a distance \(r\), using \(\sin\theta \approx \theta\) for \(\theta \ll 1\):</p>
<p>\[\Delta l = 1.220 \frac{\lambda r}{D}\]</p>
<p>And we can rearrange this to tell us what \(D\) needs to be to resolve an object of size \(\Delta l\):</p>
<p>\[D = 1.220 \frac{\lambda r}{\Delta l}\]</p>
<p>So, for the LEMs on the Moon:</p>
<ul>
<li>\(\Delta l \approx 9\,\mathrm{m}\) (size of LEM);</li>
<li>\(r \approx 4 \times 10^8\,\mathrm{m}\) (distance to Moon);</li>
<li>\(\lambda \approx 5.6 \times 10^{-7}\,\mathrm{m}\) (green light).</li></ul>
<p>And this gives us \(D \approx 30\,\mathrm{m}\).</p>
<p>This means that to even be capable of resolving the LEM on the moon we would need a telescope with a diameter of thirty metres. This is about three times larger than the largest optical telescopes in the world. Telescopes this large <a href="https://en.wikipedia.org/wiki/Thirty_Meter_Telescope">are planned</a> (and <a href="https://en.wikipedia.org/wiki/Extremely_Large_Telescope">even larger ones too</a>), but they do not exist yet.</p>
<p>And this size is the absolute minimum: to see any kind of detail you would need a truly enormous telescope: perhaps something with \(D\approx 100\,\mathrm{m}\): nobody is building anything like that soon. The LRO can see the Apollo sites because it is in an orbit around the Moon which gets as low as \(20\,\mathrm{km}\): twenty thousand times closer than the Earth is, which means it needs a telescope with a diameter twenty thousand times smaller.</p>
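<p>As a quick check on the arithmetic, here is a tiny Common Lisp sketch (mine, not from the original article) which just plugs numbers into the rearranged formula:</p>
<pre><code>(defun minimum-diameter (wavelength distance size)
  ;; the smallest telescope diameter, in metres, which can resolve
  ;; an object SIZE metres across at DISTANCE metres, observing at
  ;; WAVELENGTH metres
  (/ (* 1.220 wavelength distance) size))

;;; (minimum-diameter 5.6e-7 4e8 9)  => about 30 (LEM from Earth)
;;; (minimum-diameter 5.6e-7 2e4 9)  => about 0.0015 (LEM from LRO's
;;;                                    lowest orbit: a 1.5mm lens!)</code></pre>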
<hr />
<div class="footnotes">
<ol>
<li id="2019-07-24-seeing-apollo-footnote-1-definition" class="footnote-definition">
<p>Apollo denialists were one of the early precursors of the science denialists who are now working so hard to destroy everything worthwhile about being human. <a href="#2019-07-24-seeing-apollo-footnote-1-return">↩</a></p></li>
<li id="2019-07-24-seeing-apollo-footnote-2-definition" class="footnote-definition">
<p>‘Also, the Earth is flat.’ <a href="#2019-07-24-seeing-apollo-footnote-2-return">↩</a></p></li></ol></div>
<h1>The lessons of Apollo</h1>
<p>Tim Bradshaw, 2019-07-23</p>
<p>Tough and competent.</p>
<!-- more-->
<h2 id="the-lessons-we-have-forgotten">The lessons we have forgotten</h2>
<p>Have a plan. Have a plan for what happens if that plan goes wrong. Have a plan for what happens if <em>that</em> plan goes wrong (and go as far as you can down this recursion). Be competent to spot things going wrong and execute these plans in real-time. Accept responsibility for your actions and mistakes.</p>
<h2 id="what-we-do-instead">What we do instead</h2>
<p>Make no plan at all: it will be fine because everything is easy. Do not take responsibility but blame someone else when it is not fine — foreigners, black people, liberals, gays, traitors, democrats, muslims, jews, gypsies, it does not really matter — and work up the mob to hate them.</p>
<h2 id="tough-and-competent">Tough and competent</h2>
<blockquote>
<p>Spaceflight will never tolerate carelessness, incapacity, and neglect. Somewhere, somehow, we screwed up. It could have been in design, build, or test. Whatever it was, we should have caught it.</p></blockquote>
<blockquote>
<p>We were too gung ho about the schedule and we locked out all of the problems we saw each day in our work. Every element of the program was in trouble and so were we. The simulators were not working, Mission Control was behind in virtually every area, and the flight and test procedures changed daily. Nothing we did had any shelf life. Not one of us stood up and said, “Dammit, stop!”</p></blockquote>
<blockquote>
<p>I don’t know what Thompson’s committee will find as the cause, but I know what I find. We are the cause! We were not ready! We did not do our job. We were rolling the dice, hoping that things would come together by launch day, when in our hearts we knew it would take a miracle. We were pushing the schedule and betting that the Cape would slip before we did.</p></blockquote>
<blockquote>
<p>From this day forward, Flight Control will be known by two words: “Tough and Competent.” <em>Tough</em> means we are forever accountable for what we do or what we fail to do. We will never again compromise our responsibilities. Every time we walk into Mission Control we will know what we stand for. <em>Competent</em> means we will never take anything for granted. We will never be found short in our knowledge and in our skills. Mission Control will be perfect.</p></blockquote>
<blockquote>
<p>When you leave this meeting today you will go to your office and the first thing you will do there is to write “Tough and Competent” on your blackboards. It will <em>never</em> be erased. Each day when you enter the room these words will remind you of the price paid by Grissom, White, and Chaffee. These words are the price of admission to the ranks of Mission Control.</p></blockquote>
<p>Gene Kranz, address to his branch and flight control team on the Monday morning following the Apollo 1 disaster, 30 January 1967</p>
<h1>2020</h1>
<p>Tim Bradshaw, 2019-07-19</p>
<p>What sort of people will vote for Trump in 2020, if he lasts that long?</p>
<!-- more-->
<p>Racists will.</p>
<p>After his <a href="https://www.bbc.co.uk/news/world-us-canada-48982172">tirades of the last few days</a> it’s completely clear that Trump is a racist. He’s no longer a ‘dog whistle’ racist: telling people from minorities to ‘go home’ and supporting ‘send her back’ chants at rallies is explicitly & openly racist. And there’s evidence that this is <a href="https://www.bbc.co.uk/news/world-us-canada-49025177">deliberate policy</a>.</p>
<p>Here’s the thing: <em>if you vote for a racist, then you are a racist</em>. There’s a very common argument that somehow people who vote for something didn’t really understand what they were voting for: this argument was made about brexit and about Trump’s election. The argument is, essentially, that the people who voted for, say, Trump or brexit in 2016 were just too stupid to know what it was they were voting for. The argument is that people who disagree with us are stupid, that people who vote for racists aren’t, really, racists, they are just too stupid to understand what they’re voting for. They’re not, you know, <em>bad</em> people, they just have weak minds, unlike our superior strong minds.</p>
<p><em>Really?</em> Do you really think that people who vote for a man who tells minorities to ‘go home’, and who chant ‘send her back’ at rallies, don’t understand what that means? <em>Of course they understand what it means.</em> People who do this are racists — perhaps, perhaps, they are <em>stupid</em> racists, but they are racists — and they are supporting a racist president.</p>
<p>If Trump wins in 2020 then racism & bigotry will have won in the US.</p>
<h1>I remember Apollo</h1>
<p>Tim Bradshaw, 2019-07-16</p>
<p>All serious historians agree that the Apollo programme of the 1960s and early 1970s was the highpoint of western civilisation.</p>
<!-- more-->
<p>There were, of course, significant achievements after Apollo — Voyager, the Hubble space telescope and its successors, images of black holes, the development of economic fusion power even, although it was too late. And there was very considerable social progress after Apollo: for nearly 45 years things improved steadily. It is easy to forget this latter fact given the events that came later: the incarceration without trial, forced labour, mass rape and eventual butchery of immigrants, minority groups, people with ‘incompatible’ sexual orientations, journalists, liberals and others who inconvenienced those in power in the mid 2020s now completely overshadow the progress that was made ten years earlier.</p>
<p>The rise of the oligarchs and dictators, with their systematic suppression of the press and all divergent opinion, encouragement of mob rule, stupidity, xenophobia and science denial in the second and third decades of the 21st century was the beginning of the end.</p>
<p>The failed expeditions to Mars in 2024–2025 were both farce and tragedy. Donald Trump, still claiming democratic legitimacy despite the unequivocal results of the 2020 elections, was by this time in the final stages of senile decay: never more than the shell of a human, he was by then no more than a fulminating husk under the direct control of his Russian masters. Musk, a deeply flawed man, comes out as the unlikely hero of the affair: defending the choice of black and female astronauts against Trump’s tirades and demands and, when the outcome of the mission was beyond doubt, volunteering himself. The heroism of the astronauts, knowing they faced, at best, slow death by radiation poisoning on Mars, can not be overstated. In the event, of course, they did not get that far: the live broadcast of the terrible end of the second mission, with the doomed astronauts’ condemnation of the programme and Trump even as their oxygen leaked away, ensured there would be no more although the US was by then losing the technical ability in any case. Musk’s fate remains unknown: it is assumed he was murdered by members of Trump’s family in revenge for his ‘sabotaging’ of the missions.</p>
<p>The two nuclear wars of 2032 (US-China) and 2035 (Russia-China-UK), while limited, killed well over half a billion people. Climate change (denied, of course, by the oligarchs but well-known to be an existential threat by the turn of the millennium) did the rest: the harvest failures of 2040 killed nearly 150 million people in North America alone and marked the effective end of the US which had already been weakened by the war with China and a series of preceding wars (the US won no war it fought after 1945): after 2040 there were never less than two competing presidents claiming authority over what had been the US, and in 2053 there were, briefly, seven.</p>
<p>Reliable information is increasingly scarce after 2055. The Kessler event of 2032–2033, triggered by the intentional destruction of satellites by the US in the US-China war, destroyed essentially all existing satellites and made space inaccessible to humans, possibly for the next few centuries. Planet-wide Earth-based communication systems had been catastrophically damaged in the two wars, and finally collapsed in 2055. So information after 2055 is inevitably somewhat speculative: we simply do not know how many survivors there are in the UK and what their condition is, for instance.</p>
<p>By 2060 the population of the former US was estimated at under ten million, of which no more than a few tens of thousands had access to electricity. Those numbers will be lower now. The UK, long in decline, and latterly little more than a vassal state of the US, itself effectively a dictatorship between 2020 and 2040, also essentially ceased to exist in the 2035 war: the estimated surviving population there may now be as few as tens of thousands, mostly in Scotland. The northern areas of continental Europe are still relatively benign, but Italy, Spain, Greece, much of southern France and many other countries have been lost to climate change.</p>
<hr />
<p>Few people are now alive who were alive during the Apollo programme, and fewer still who have any memory of it. Soon there will be no-one alive who remembers it.</p>
<p>But we must remember Apollo: we must remember that a great nation could devote itself to a mission of exploration, not war, and could thus achieve great things, whatever came later. We must remember that this is possible, that hatred, lies and division spread by people with small minds are not the only way. We must remember that, once, there was a project where they could truly say</p>
<blockquote>
<p>that America’s challenge of today has forged man’s destiny of tomorrow. And, as we leave the Moon at Taurus-Littrow, we leave as we came and, God willing, as we shall return, with peace and hope for all mankind. “Godspeed the crew of Apollo 17.”</p></blockquote>
<p>I remember Apollo.</p>
<hr />
<p>Translated from the Japanese, 20690716</p>
<h1>Democracy</h1>
<p>Tim Bradshaw, 2019-06-21</p>
<p>Sometime in the middle of 2019, the UK will have a new prime minister. He<sup><a href="#2019-06-21-democracy-footnote-1-definition" name="2019-06-21-democracy-footnote-1-return">1</a></sup> will have considerable power to control whether, when and how the UK leaves the EU.</p>
<!-- more-->
<p>This prime minister will have been selected from a shortlist of two, both representing the same party, by a tiny electorate who can vote only because they have paid money to be able to do so. This electorate are 97% white (the UK as a whole is under 90% white), 71% male (UK as a whole approximately 50%) and far richer than the UK average<sup><a href="#2019-06-21-democracy-footnote-2-definition" name="2019-06-21-democracy-footnote-2-return">2</a></sup>.</p>
<p>Almost certainly this person will be Boris Johnson. Johnson has been sacked, twice, for lying<sup><a href="#2019-06-21-democracy-footnote-3-definition" name="2019-06-21-democracy-footnote-3-return">3</a></sup>, and this is very far from the limit of his lies. He has conspired to beat up a journalist<sup><a href="#2019-06-21-democracy-footnote-4-definition" name="2019-06-21-democracy-footnote-4-return">4</a></sup>. He is the kind of casual racist that people from his social class usually are, having published a column in a newspaper in which he talked about black people as ‘piccaninnies’ with ‘watermelon smiles’. He is an English nationalist bigot, having been the editor of a magazine in which a poem was published suggesting that Scotland be turned into a ghetto and the ‘tartan dwarves’ within it should be exterminated. He has referred to women as ‘hot totty’ and talked about the ‘tottymeter’. To say that he has a long record of offensive behaviour would be putting it rather mildly<sup><a href="#2019-06-21-democracy-footnote-5-definition" name="2019-06-21-democracy-footnote-5-return">5</a></sup>.</p>
<p>Although he is highly-educated in a rather unhelpful area (classics, inevitably at Eton and Oxford), he also seems to be rather stupid. He had to be stopped from reciting a Kipling poem inside a temple in Myanmar<sup><a href="#2019-06-21-democracy-footnote-6-definition" name="2019-06-21-democracy-footnote-6-return">6</a></sup> by the British ambassador: even someone who holds racist views as he does should realise that expressing them in that context is a catastrophically stupid thing to do. Unless, of course, he was simply too stupid to understand what he was doing. He is, in fact, an upper-class twit.</p>
<p>Or, perhaps, not: perhaps he just does not care. There’s a well-known<sup><a href="#2019-06-21-democracy-footnote-7-definition" name="2019-06-21-democracy-footnote-7-return">7</a></sup> quote about him from Max Hastings:</p>
<blockquote>
<p>I’m not sure he’s capable of caring for any human being other than himself.</p></blockquote>
<p>Perhaps, in fact, he was reciting Kipling because he just doesn’t care how much damage he does; because, like Trump, he’s only dimly aware that other people even exist.</p>
<p>Johnson’s supporters are even less typical of the UK than the already tiny, skewed electorate: 85% of his supporters want to leave the EU with no deal compared with 66% within his electorate, and 25% within the UK as a whole<sup><a href="#2019-06-21-democracy-footnote-8-definition" name="2019-06-21-democracy-footnote-8-return">8</a></sup>.</p>
<p>He has suggested, or at least refused to rule out, that he might ‘prorogue’ parliament in order to allow the UK to leave the EU with no deal at the end of October 2019: this means suspending it, so that MPs — the people the UK actually voted for, as opposed to him, who they did not vote for — have no say in what happens.</p>
<p>If that happens, the UK will leave the EU with no deal on Hallowe’en, under the control of a man educated at Eton and Oxford and elected by less than 120,000 people (0.2% of the people entitled to vote in the UK) who were allowed to vote for him because they paid to do so.</p>
<p>This, apparently, is democracy.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2019-06-21-democracy-footnote-1-definition" class="footnote-definition">
<p>Because he will, of course, be a middle-aged white man. <a href="#2019-06-21-democracy-footnote-1-return">↩</a></p></li>
<li id="2019-06-21-democracy-footnote-2-definition" class="footnote-definition">
<p>See <a href="https://www.bbc.co.uk/news/uk-politics-48395211">this article from the BBC</a>, and <a href="https://www.economist.com/britain/2019/06/13/the-question-is-not-who-will-lead-the-conservative-party-but-whether-it-will-survive">this article in <em>The Economist</em></a>. <a href="#2019-06-21-democracy-footnote-2-return">↩</a></p></li>
<li id="2019-06-21-democracy-footnote-3-definition" class="footnote-definition">
<p>Once from a newspaper, and once from his position as shadow arts minister. <a href="#2019-06-21-democracy-footnote-3-return">↩</a></p></li>
<li id="2019-06-21-democracy-footnote-4-definition" class="footnote-definition">
<p>See <a href="https://www.theguardian.com/politics/2009/mar/29/boris-johnson-channel-4">this article</a>. <a href="#2019-06-21-democracy-footnote-4-return">↩</a></p></li>
<li id="2019-06-21-democracy-footnote-5-definition" class="footnote-definition">
<p>See for instance <a href="https://www.businessinsider.com/boris-johnson-record-sexist-homophobic-and-racist-comments-bumboys-piccaninnies-2019-6">this</a>. <a href="#2019-06-21-democracy-footnote-5-return">↩</a></p></li>
<li id="2019-06-21-democracy-footnote-6-definition" class="footnote-definition">
<p>The poem was <em>The Road to Mandalay</em>: see <a href="https://www.theguardian.com/politics/2017/sep/30/boris-johnson-caught-on-camera-reciting-kipling-in-myanmar-temple">this</a>. <a href="#2019-06-21-democracy-footnote-6-return">↩</a></p></li>
<li id="2019-06-21-democracy-footnote-7-definition" class="footnote-definition">
<p>I think the origin of this quote is an interview in the ‘PM’ programme on BBC Radio 4, although I haven’t been able to track it down. It is cited in <a href="https://www.theguardian.com/commentisfree/2019/jun/23/the-guardian-view-on-boris-johnson-a-question-of-character">this Guardian editorial</a>. <a href="#2019-06-21-democracy-footnote-7-return">↩</a></p></li>
<li id="2019-06-21-democracy-footnote-8-definition" class="footnote-definition">
<p>See <a href="https://theconversation.com/boris-johnson-supporters-want-no-deal-brexit-and-less-talk-of-climate-change-new-survey-of-party-members-reveals-118633">this</a> and also a reference in <em>The Economist</em> article above. <a href="#2019-06-21-democracy-footnote-8-return">↩</a></p></li></ol></div>Function calling conventions and bindingsurn:https-www-tfeb-org:-fragments-2019-01-04-function-calling-conventions-and-bindings2019-01-04T10:19:36Z2019-01-04T10:19:36ZTim Bradshaw
<p>An attempt to describe three well-known function calling conventions in terms of bindings.</p>
<!-- more-->
<p>A little while ago I wrote an <a href="../../../../2018/12/11/call-by-value-in-scheme-and-lisp">article on bindings</a> which, in turn, was based on my answer to <a href="https://stackoverflow.com/questions/53694761/pass-by-value-confusion-in-scheme">this Stack Overflow question</a>. I have since written another answer to <a href="https://stackoverflow.com/questions/54018077/in-common-lisp-when-are-objects-referenced-and-when-are-they-directly-accessed">a more recent question</a> and I thought it would be worth summarising part of that to describe how three famous function calling conventions can be described in terms of bindings<sup><a href="#2019-01-04-function-calling-conventions-and-bindings-footnote-1-definition" name="2019-01-04-function-calling-conventions-and-bindings-footnote-1-return">1</a></sup>.</p>
<h2 id="bindings-in-brief">Bindings in brief</h2>
<p>A <em>binding</em> is an association between a name (a variable) and a value, where the value can be any object the language can talk about. In most Lisps (and other languages) bindings are not first-class: the language can not talk about bindings directly, and in particular bindings can not be values. Bindings are, or may be, <em>mutable</em>: their values (but not their names) can be changed by assignment. Many bindings can share the same value. Bindings have scope (where they are accessible) and extent (how long they are accessible for) and there are rules about that.</p>
<h2 id="call-by-value">Call by value</h2>
<p>In call by value the <em>value</em> of a binding is passed to a procedure. This means that the procedure can not mutate the binding itself. If the value is a mutable object it can be altered by the procedure, but the binding can not be.</p>
<p>Call by value is the convention used by all Lisps I know of. Here is a function which demonstrates that call by value can not mutate bindings:</p>
<pre><code>(defun pbv (&optional (fn #'identity))
  ;; If FN returns then the first value of this function will be T
  (let ((c (cons 0 0)))               ;first binding
    (let ((cc c))                     ;second binding, shares value with first
      (funcall fn c)                  ;FN gets the *value* of C
      (values (eq c cc) c))))         ;C and CC still refer to the same object</code></pre>
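<p>For example (this transcript is mine, not the original article’s): passing a function which mutates the cons shows that the object can be altered even though the binding can’t be:</p>
<pre><code>> (pbv (lambda (c) (setf (car c) 1)))
T
(1 . 0)</code></pre>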
<h2 id="call-by-reference">Call by reference</h2>
<p>In call by reference, procedures get <em>the bindings themselves</em> as arguments. If a procedure modifies the binding by assignment, then it is modified in the calling procedure as well.</p>
<p>Lisp does not use call by reference: Fortran does, or can, use a calling mechanism which is equivalent to call by reference<sup><a href="#2019-01-04-function-calling-conventions-and-bindings-footnote-2-definition" name="2019-01-04-function-calling-conventions-and-bindings-footnote-2-return">2</a></sup>.</p>
<p>It is possible to implement what is essentially call by reference in Lisp (here Common Lisp, but any Lisp with lexical scope, indefinite extent & macros can do this) using some macrology:</p>
<pre><code>(defmacro capture-binding (var)
  ;; Construct an object which captures a binding
  `(lambda (&optional (new-val nil new-val-p))
     (when new-val-p
       (setf ,var new-val))
     ,var))

(declaim (inline captured-binding-value
                 (setf captured-binding-value)))

(defun captured-binding-value (cb)
  ;; value of a captured binding
  (funcall cb))

(defun (setf captured-binding-value) (new cb)
  ;; change the value of a captured binding
  (funcall cb new))</code></pre>
<p>And now, given</p>
<pre><code>(defun mutate-binding (b v)
  (setf (captured-binding-value b) v))

(defun sort-of-call-by-reference ()
  (let ((c (cons 1 1)))
    (let ((cc c))
      (mutate-binding (capture-binding cc) 3)
      (values c cc))))

> (sort-of-call-by-reference)
(1 . 1)
3</code></pre>
<p>The trick here is that the procedure created by the <code>capture-binding</code> macro has access to the binding being captured, and can mutate it.</p>
<h2 id="call-by-name">Call by name</h2>
<p>Call by name is the same as call by value, except the value of an argument is only computed at the point it is needed. Call by name is a form of delayed evaluation or normal-order evaluation strategy.</p>
<p>Lisp (at least Common Lisp: Lisps which have normal-order evaluation strategies exist) does not have call by name, but again it can be emulated with some macrology. (Strictly, because the <code>delay</code> below memoizes its value, this emulates <em>call by need</em>: true call by name would evaluate the form again at each use.)</p>
<pre><code>(defmacro delay (form)
  ;; simple-minded DELAY.  FORM is assumed to return a single value,
  ;; and will be evaluated no more than once.
  (let ((fpn (make-symbol "FORCEDP"))
        (vn (make-symbol "VALUE")))
    `(let ((,fpn nil) ,vn)
       (lambda ()
         (unless ,fpn
           (setf ,fpn t
                 ,vn ,form))
         ,vn))))

(declaim (inline force))

(defun force (thunk)
  ;; force a thunk
  (funcall thunk))

(defmacro funcall/delayed (fn &rest args)
  ;; call a function with a bunch of delayed arguments
  `(funcall ,fn ,@(mapcar (lambda (a)
                            `(delay ,a))
                          args)))</code></pre>
<p>And now</p>
<pre><code>(defun return-first-thunk-value (t1 t2)
  (declare (ignorable t2))
  (force t1))

(defun surprisingly-quick ()
  (funcall/delayed #'return-first-thunk-value
                   (cons 1 2)
                   (loop repeat 1000000
                         collect
                         (loop repeat 1000000
                               collect
                               (loop repeat 1000000
                                     collect 1)))))

> (time (surprisingly-quick))
Timing the evaluation of (surprisingly-quick)
User time = 0.000
System time = 0.000
Elapsed time = 0.001
Allocation = 224 bytes
3 Page faults
(1 . 2)</code></pre>
<p>The second argument to <code>return-first-thunk-value</code> was never forced, and so the function completes in reasonable time.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2019-01-04-function-calling-conventions-and-bindings-footnote-1-definition" class="footnote-definition">
<p>This, in turn, is distantly descended from <a href="https://www.xach.com/naggum/articles/3229347076995853@naggum.net.html">a post on <code>comp.lang.lisp</code> by Erik Naggum</a>. <a href="#2019-01-04-function-calling-conventions-and-bindings-footnote-1-return">↩</a></p></li>
<li id="2019-01-04-function-calling-conventions-and-bindings-footnote-2-definition" class="footnote-definition">
<p>I think Fortran is allowed to implement its ‘by reference’ calls by copying any modified bindings back to the bindings in the parent procedure, and this is largely equivalent, at least for single-threaded code. <a href="#2019-01-04-function-calling-conventions-and-bindings-footnote-2-return">↩</a></p></li></ol></div>
<h1>Call by value in Scheme and Lisp</h1>
<p>Tim Bradshaw, 2018-12-11</p>
<p>I find the best way to think about this is to think in terms of <em>bindings</em>, rather than environments or frames, which are simply containers for bindings.</p>
<!-- more-->
<h2 id="bindings">Bindings</h2>
<p>A binding is an association between a <em>name</em> and a <em>value</em>. The name is often called a ‘variable’ and the value is, well, the value of the variable. The value of a binding can be any object that the language can talk about at all. Bindings, however, are behind-the-scenes things (sometimes this is called ‘not being first-class objects’): they’re not things that can be represented in the language but rather things that you can use as part of the model of how the language works. So <em>the value of a binding can’t be a binding</em>, because bindings are not first-class: the language can’t talk about bindings.</p>
<p>There are some rules about bindings:</p>
<ul>
<li>there are forms which create them, of which the most important two are <code>lambda</code> and <code>define</code>;</li>
<li>bindings are not first-class — the language can not represent bindings as values;</li>
<li>bindings are, or may be, <em>mutable</em> — you can change the value of a binding once it exists — and the form that does this is <code>set!</code>;</li>
<li>there is no operator which destroys a binding;</li>
<li>bindings have <em>lexical scope</em> — the bindings available to a bit of code are the ones you can see by looking at it, not ones you have to guess by running the code and which may depend on the dynamic state of the system;</li>
<li>only one binding for a given name is ever accessible from a given bit of code — if more than one is lexically visible then the innermost one shadows any outer ones;</li>
<li>bindings have <em>indefinite extent</em> — if a binding is ever available to a bit of code, it is always available to it.</li></ul>
<p>Obviously these rules need to be elaborated significantly (especially with regard to global bindings & forward-referenced bindings) and made more formal, but they are enough to understand what happens. In particular I don’t really think you need to spend a lot of time worrying about environments: the environment of a bit of code is just the set of bindings accessible to it, so rather than worry about the environment just worry about the bindings.</p>
<h2 id="call-by-value">Call by value</h2>
<p>So, what ‘call by value’ means is that when you call a procedure with an argument which is a variable (a binding) what is passed to it is the <em>value</em> of the variable binding, not the binding itself. The procedure then creates a <em>new</em> binding with the same value. Two things follow from that:</p>
<ul>
<li>the original binding can not be altered by the procedure — this follows because the procedure only has the value of it, not the binding itself, and bindings are not first-class so you can’t cheat by passing the binding itself as the value;</li>
<li>if the value is itself a mutable object (arrays & conses are example of objects which usually are mutable, numbers are examples of objects which are not) then the procedure can mutate that object.</li></ul>
<h2 id="examples-of-the-rules-about-bindings">Examples of the rules about bindings</h2>
<p>So, here are some examples of these rules.</p>
<pre><code>(define (silly x)
  (set! x (+ x 1))
  x)

(define (call-something fn val)
  (fn val)
  val)

> (call-something silly 10)
10</code></pre>
<p>So, here we are creating two top-level bindings, for <code>silly</code> and <code>call-something</code>, both of which have values which are procedures. The value of <code>silly</code> is a procedure which, when called:</p>
<ol>
<li>creates a new binding whose name is <code>x</code> and whose value is the argument to <code>silly</code>;</li>
<li>mutates this binding so its value is incremented by one;</li>
<li>returns the value of this binding, which is one more than the value it was called with.</li></ol>
<p>The value of <code>call-something</code> is a procedure which, when called:</p>
<ol>
<li>creates two bindings, one named <code>fn</code> and one named <code>val</code>;</li>
<li>calls the value of the <code>fn</code> binding with the value of the <code>val</code> binding;</li>
<li>returns the value of the <code>val</code> binding.</li></ol>
<p>Note that <em>whatever</em> the call to <code>fn</code> does, it can not mutate the binding of <code>val</code>, because it has no access to it. So what you can <em>know</em>, by looking at the definition of <code>call-something</code> is that, if it returns at all (it may not return if the call to <code>fn</code> does not return), it will return the value of its second argument. This guarantee is what ‘call by value’ means: a language (such as Fortran) which supports other call mechanisms can’t always promise this.</p>
<pre><code>(define (outer x)
  (define (inner x)
    (+ x 1))
  (inner (+ x 1)))</code></pre>
<p>Here there are four bindings: <code>outer</code> is a top-level binding whose value is a procedure which, when it is called, creates a binding for <code>x</code> whose value is its argument. It then creates another binding called <code>inner</code> whose value is another procedure, which, when it is called, creates a <em>new</em> binding for <code>x</code> to <em>its</em> argument, and then returns the value of that binding plus one. <code>outer</code> then calls this inner procedure with the value of its binding for <code>x</code>.</p>
<p>The important thing here is that, in <code>inner</code>, there are two bindings for <code>x</code> which are potentially lexically visible, but the closest one — the one established by <code>inner</code> — wins, because only one binding for a given name can ever be accessible at one time.</p>
<p>Here is the previous code (this would not be equivalent if <code>inner</code> was recursive) expressed with explicit <code>lambda</code>s:</p>
<pre><code>(define outer
  (λ (x)
    ((λ (inner)
       (inner (+ x 1)))
     (λ (x)
       (+ x 1)))))</code></pre>
<p>And finally an example of mutating bindings:</p>
<pre><code>(define (make-counter val)
  (λ ()
    (let ((current val))
      (set! val (+ val 1))
      current)))

> (define counter (make-counter 0))
> (counter)
0
> (counter)
1
> (counter)
2</code></pre>
<p>So, here, <code>make-counter</code> (is the name of a binding whose value is a procedure which, when called,) establishes a new binding for <code>val</code> and then returns a procedure it has created. This procedure makes a new binding called <code>current</code> which catches the current value of <code>val</code>, <em>mutates</em> the binding for <code>val</code> to add one to it, and returns the value of <code>current</code>. This code exercises the ‘if you can ever see a binding, you can always see it’ rule: the binding for <code>val</code> created by the call to <code>make-counter</code> is visible to the procedure it returns for as long as that procedure exists (and that procedure exists at least as long as there is a binding for it), and it also mutates a binding with <code>set!</code>.</p>
<h2 id="why-not-environments">Why not environments?</h2>
<p><a href="https://mitpress.mit.edu/sites/default/files/sicp/index.html">SICP</a>, in <a href="https://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-19.html#%_chap_3">chapter 3</a>, introduces the ‘environment model’, where at any point there is an environment, consisting of a sequence of frames, each frame containing bindings. Obviously this is a fine model, but it introduces three kinds of thing — the enviromnent, the frames in the environment and the bindings in the frame — two of which are utterly intangible. At least for a binding you can get hold of it in some way: you can see it being created in the code and you can see references to it. So I prefer not to think in terms of these two extra sorts of thing which you can never get any kind of handle on.</p>
<p>However this is a choice which makes no difference in practice: thinking purely in terms of bindings helps me, thinking in terms of environments, frames & bindings may well help other people more.</p>
<h2 id="shorthands">Shorthands</h2>
<p>In what follows I am going to use a shorthand for talking about bindings, especially top-level ones:</p>
<ul>
<li>’<code>x</code> is a procedure which …’ means ’<code>x</code> is the name of a binding whose value is a procedure which, when called, …’;</li>
<li>’<code>y</code> is …’ means ’<code>y</code> is the name of a binding the value of which is …’;</li>
<li>’<code>x</code> is called with <code>y</code>’ means ‘the value of the binding named by <code>x</code> is called with the value of the binding named by <code>y</code>’;</li>
<li>’… binds <code>x</code> to …’ means ’… creates a binding whose name is <code>x</code> and whose value is …’;</li>
<li>’<code>x</code>’ means ‘the value of <code>x</code>’;</li>
<li>and so on.</li></ul>
<p>Describing bindings like this is common, as the fully-explicit way is just painful: I’ve tried (but probably failed in places) to be fully explicit above.</p>
<h2 id="the-answer">The answer</h2>
<p>And finally, after this long preamble, here’s the answer to the question you asked<sup><a href="#2018-12-11-call-by-value-in-scheme-and-lisp-footnote-1-definition" name="2018-12-11-call-by-value-in-scheme-and-lisp-footnote-1-return">1</a></sup>.</p>
<pre><code>(define (make-withdraw balance)
  (λ (amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds")))</code></pre>
<p><code>make-withdraw</code> binds <code>balance</code> to its argument and returns a procedure it makes. This procedure, when called:</p>
<ol>
<li>binds <code>amount</code> to its argument;</li>
<li>compares <code>amount</code> with <code>balance</code> (which it can still see because it could see it when it was created);</li>
<li>if there’s enough money then it mutates the <code>balance</code> binding, decrementing its value by the value of the <code>amount</code> binding, and returns the new value;</li>
<li>if there’s not enough money it returns <code>"Insufficient funds"</code> (but does <em>not</em> mutate the <code>balance</code> binding, so you can try again with a smaller amount: a real bank would probably suck some money out of the <code>balance</code> binding at this point as a fine).</li></ol>
<p>Now</p>
<pre><code>(define x (make-withdraw 100))</code></pre>
<p>creates a binding for <code>x</code> whose value is one of the procedures described above: in that procedure <code>balance</code> is initially <code>100</code>.</p>
<pre><code>(define (f y) (y 25))</code></pre>
<p><code>f</code> is a procedure (is the name of a binding whose value is a procedure, which, when called) which binds <code>y</code> to its argument and then calls it with an argument of <code>25</code>.</p>
<pre><code>(f x)</code></pre>
<p>So, <code>f</code> is called with <code>x</code>, <code>x</code> being (bound to) the procedure constructed above. In <code>f</code>, <code>y</code> is bound to this procedure (not to a copy of it, to it), and this procedure is then called with an argument of <code>25</code>. This procedure then behaves as described above, and the results are as follows:</p>
<pre><code>> (f x)
75
> (f x)
50
> (f x)
25
> (f x)
0
> (f x)
"Insufficient funds"</code></pre>
<p>Note that:</p>
<ul>
<li>no first-class objects are copied anywhere in this process: there is no ‘copy’ of a procedure created;</li>
<li>no first-class objects are mutated anywhere in this process;</li>
<li>bindings are created (and later become inaccessible and so can be destroyed) in this process;</li>
<li>one binding is mutated repeatedly in this process (once for each call);</li>
<li>I have not anywhere needed to mention ‘environments’, which are just the set of bindings visible from a certain point in the code and I think not a very useful concept.</li></ul>
<p>I hope this makes some kind of sense.</p>
<hr />
<h2 id="a-more-elaborate-version-of-the-above-code">A more elaborate version of the above code</h2>
<p>Something you might want to be able to do is to back out a transaction on your account. One way to do that is to return, as well as the new balance, a procedure which undoes the last transaction. Here is a procedure which does that (this code is in <a href="http://racket-lang.org/">Racket</a>):</p>
<pre><code>(define (make-withdraw/backout
         balance
         (insufficient-funds "Insufficient funds"))
  (λ (amount)
    (if (>= balance amount)
        (let ((last-balance balance))
          (set! balance (- balance amount))
          (values balance
                  (λ ()
                    (set! balance last-balance)
                    balance)))
        (values
         insufficient-funds
         (λ () balance)))))</code></pre>
<p>When you make an account with this procedure, then calling it returns two values: the first is the new balance, or the value of <code>insufficient-funds</code> (defaultly <code>"Insufficient funds"</code>), the second is a procedure which will undo the transaction you just did. Note that it undoes it by explicitly putting back the old balance, because you can’t necessarily rely on <code>(= (- (+ x y) y) x)</code> being true in the presence of floating-point arithmetic I think. If you understand how this works then you probably understand bindings.</p>
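<p>For example, a possible transcript (mine, assuming a Racket REPL):</p>
<pre><code>> (define acct (make-withdraw/backout 100))
> (define-values (remaining undo) (acct 30))
> remaining
70
> (undo)
100</code></pre>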
<hr />
<div class="footnotes">
<ol>
<li id="2018-12-11-call-by-value-in-scheme-and-lisp-footnote-1-definition" class="footnote-definition">
<p>This originated as an answer to <a href="https://stackoverflow.com/questions/53694761/pass-by-value-confusion-in-scheme">this Stack Overflow question</a>. <a href="#2018-12-11-call-by-value-in-scheme-and-lisp-footnote-1-return">↩</a></p></li></ol></div>
<h1>Worse is better</h1>
<p>Tim Bradshaw, 2018-11-28</p>
<p>In 1990, <a href="https://www.dreamsongs.com/WorseIsBetter.html">Richard Gabriel gave a talk</a> from which Jamie Zawinski later extracted a section called <a href="https://www.jwz.org/doc/worse-is-better.html">‘worse is better’</a> which he distributed widely. It’s strange but, perhaps, interesting, how prescient this idea was.</p>
<!-- more-->
<p>The paper describes two approaches to design<sup><a href="#2018-11-28-worse-is-better-footnote-1-definition" name="2018-11-28-worse-is-better-footnote-1-return">1</a></sup>.</p>
<h2 id="the-right-thing">The Right Thing</h2>
<ul>
<li>Designs must be <strong>simple</strong>, both in implementation and interface. It is more important for the interface to be simple than the implementation.</li>
<li>Designs must be <strong>correct</strong> in all observable aspects. Incorrectness is simply not allowed.</li>
<li>Designs must be <strong>consistent</strong>. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.</li>
<li>Designs must be <strong>complete</strong> and cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.</li></ul>
<h2 id="worse-is-better">Worse Is Better</h2>
<ul>
<li>Designs must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.</li>
<li>Designs must be correct in all observable aspects. It is slightly better to be simple than correct.</li>
<li>Designs must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.</li>
<li>Designs must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.</li></ul>
<h2 id="today">Today</h2>
<p>Today I felt it necessary to complain about a particularly stupid bit of behaviour in a filesystem, & I wrote, without conscious thought</p>
<blockquote>
<p>[…] that something like this is even possible in 2018 means that, really, the sort of computing environment which seemed like it would happen in 1980 and still seemed possible into the late 1990s is just dead: worse is not just better, worse has taken better, killed it, buried it in a pit and erased any memory that it ever existed.</p></blockquote>
<p>Of course no-one is listening (none of the people I sent this to would even have recognised the term, I expect) just as no-one, or no-one who counted, listened to the original paper. But everything that is making modern computing systems so horrible — all the hardware bugs, all the systemic insecurity that is going to cost us very dearly if it hasn’t already, all of it — is because no-one listened and worse won by default as a result.</p>
<p>Today few people even remember that there was once an option to do things a better way. Soon, no-one will.</p>
<p>Oh, well.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2018-11-28-worse-is-better-footnote-1-definition" class="footnote-definition">
<p>These descriptions are stolen almost directly from <a href="https://www.dreamsongs.com/RiseOfWorseIsBetter.html">the original</a>: any errors I have introduced by rewording things are my own. <a href="#2018-11-28-worse-is-better-footnote-1-return">↩</a></p></li></ol></div>
<h1>After Charlottesville</h1>
<p>Tim Bradshaw, 2018-08-31</p>
<p><a href="../../../../2017/06/13/no-excuses">In June 2017</a> I argued that people who voted for Trump were racists: I’m very unhappy with that conclusion.</p>
<!-- more-->
<p>But a year after <a href="https://en.wikipedia.org/wiki/Unite_the_Right_rally">the events at Charlottesville on the 11th and 12th of August 2017</a>, and Trump’s response to them, it’s obvious that I was right: a very significant proportion of Americans are certainly racists. And the same is, surely, true for other countries: many of the people I see every day must be racists.</p>
<h2 id="charlottesville">Charlottesville</h2>
<p>The march at Charlottesville was not just some rather right-wing people wanting to preserve the symbols of their romanticised past without thinking too hard about what that past actually involved: it was people waving flags with swastikas on them and chanting ‘Jews will not replace us’. It was, in other words, Nazis<sup><a href="#2018-08-31-after-charlottesville-footnote-1-definition" name="2018-08-31-after-charlottesville-footnote-1-return">1</a></sup>. Of course, not everyone on the march was doing this, but here’s the thing: if you find yourself on the same march as people waving swastikas and chanting antisemitic slogans, <em>you stop marching</em>. If you don’t stop marching you are, in fact, supporting them. You don’t get to say that, well, you march alongside Nazis but you’re not actually a Nazi supporter: and if you support Nazis, you’re a Nazi.</p>
<p>And these Nazis aren’t fooling around: one of them <a href="https://en.wikipedia.org/wiki/Unite_the_Right_rally#Heather_Heyer">drove a car into the crowd</a>, <a href="https://en.wikipedia.org/wiki/Unite_the_Right_rally#Heather_Heyer">killing Heather Heyer</a>. That’s murder, and an act of terrorism. And there were other attacks associated with the march, including on <a href="https://en.wikipedia.org/wiki/Assault_of_DeAndre_Harris">DeAndre Harris</a>, who was beaten with a metal pipe and wooden boards.</p>
<h2 id="trumps-response">Trump’s response</h2>
<p>We all know what this was. Initially, on the 12th of August he made a statement saying:</p>
<blockquote>
<p>We condemn in the strongest possible terms this egregious display of hatred, bigotry and violence on many sides<sup><a href="#2018-08-31-after-charlottesville-footnote-2-definition" name="2018-08-31-after-charlottesville-footnote-2-return">2</a></sup>.</p></blockquote>
<p>He repeated ‘on many sides’ twice.</p>
<p>On the 14th of August he gave another statement saying:</p>
<blockquote>
<p>Racism is evil. And those who cause violence in its name are criminals and thugs, including KKK, Neo-Nazis, White Supremacists, and other hate groups are repugnant to everything we hold dear as Americans. Those who spread violence in the name of bigotry strike at the very core of America<sup><a href="#2018-08-31-after-charlottesville-footnote-3-definition" name="2018-08-31-after-charlottesville-footnote-3-return">3</a></sup>.</p></blockquote>
<p>It seems to be pretty clear that he made this statement as a result of strong pressure from people in the administration after the reaction to his first statement, and that it was written for him.</p>
<p>Then, on the 15th of August <a href="https://www.bbc.com/news/world-us-canada-40943425">he gave in</a> and <a href="https://www.nytimes.com/2017/08/15/us/politics/trump-press-conference-charlottesville.html">said what he really thought</a>:</p>
<blockquote>
<p>I think there is blame on both sides. You had a group on one side that was bad and you had a group on the other side that was also very violent. And nobody wants to say that, but I’ll say it right now. […] I’ve condemned neo-Nazis. I’ve condemned many different groups, but not all of those people were neo-Nazis, believe me. Not all of those people were white supremacists, by any stretch. […] You have some very bad people in that group, but you also had people that were very fine people on both sides<sup><a href="#2018-08-31-after-charlottesville-footnote-4-definition" name="2018-08-31-after-charlottesville-footnote-4-return">4</a></sup>.</p></blockquote>
<p>So we know what he thinks: he thinks that both sides are pretty much the same. Nazis and those who march with them are pretty much equivalent to those who fight them, and in particular among those marching with the Nazis there were ‘fine people’. Nazis are people who are not just racists: they don’t just think some groups of people are inherently superior to others, they advocate that the inferior groups should be gassed, <em>and they have in the past done just that to millions of people</em>. If you’re a Nazi or you are marching with Nazis, <em>you are not ‘fine people’</em>: you’re a deeply horrible human being.</p>
<p>So, OK, that’s pretty clear, right? Trump was offering support to racists, and in fact to Nazis. You really can’t miss that, and I’m sure no-one did.</p>
<h2 id="where-this-goes">Where this goes</h2>
<p>So, obviously, if Trump’s supporters were not racists his support would have collapsed in the days & weeks after Charlottesville: people who had missed his racism in the election campaign weren’t going to miss this, because it’s not possible to miss it. Well, of course his support has not collapsed<sup><a href="#2018-08-31-after-charlottesville-footnote-5-definition" name="2018-08-31-after-charlottesville-footnote-5-return">5</a></sup>, and so there is only one conclusion: an awful lot of people, including the elected politicians who still support him, are racists.</p>
<blockquote>
<p>If you’re not outraged, you’re not paying attention.</p></blockquote>
<p>— Heather Heyer’s last post on Facebook</p>
<hr />
<div class="footnotes">
<ol>
<li id="2018-08-31-after-charlottesville-footnote-1-definition" class="footnote-definition">
<p>Perhaps they were, technically, <em>neo</em>-Nazis: there is no useful distinction. <a href="#2018-08-31-after-charlottesville-footnote-1-return">↩</a></p></li>
<li id="2018-08-31-after-charlottesville-footnote-2-definition" class="footnote-definition">
<p>Source: <a href="https://edition.cnn.com/2017/08/14/politics/charlottesville-nazi-trump-statement-trnd/index.html">this CNN article</a>. <a href="#2018-08-31-after-charlottesville-footnote-2-return">↩</a></p></li>
<li id="2018-08-31-after-charlottesville-footnote-3-definition" class="footnote-definition">
<p>Source: as previous quote. <a href="#2018-08-31-after-charlottesville-footnote-3-return">↩</a></p></li>
<li id="2018-08-31-after-charlottesville-footnote-4-definition" class="footnote-definition">
<p>Source: <a href="https://www.cnbc.com/2017/08/15/read-the-transcript-of-donald-trumps-jaw-dropping-press-conference.html">this transcript</a>. <a href="#2018-08-31-after-charlottesville-footnote-4-return">↩</a></p></li>
<li id="2018-08-31-after-charlottesville-footnote-5-definition" class="footnote-definition">
<p>At the time of writing his approval rating appears to be about 40%, based on <a href="https://news.gallup.com/poll/203198/presidential-approval-ratings-donald-trump.aspx">this</a>. <a href="#2018-08-31-after-charlottesville-footnote-5-return">↩</a></p></li></ol></div>Vellumurn:https-www-tfeb-org:-fragments-2017-06-22-vellum2017-06-22T14:58:37Z2017-06-22T14:58:37ZTim Bradshaw
<p>The <a href="http://www.bbc.co.uk/news/magazine-35569281">UK keeps its laws on vellum</a>: this seems to be a ludicrously archaic thing to do: is it?</p>
<!-- more-->
<h2 id="dont-preserve-physical-artifacts-preserve-information">Don’t preserve physical artifacts: preserve information</h2>
<p>People who deal with archives are used to dealing with physical objects and worrying about their longevity. So they worry about how long paper and vellum last, what their decay mechanisms are and how they can be minimised. Everything is kept in controlled conditions so that the physical objects last as long as they can. Thus it is tempting to think that preserving information is the same thing as preserving the physical objects in which it resides: to preserve digital information you must preserve the media — tape, disks and so on — on which it resides. But we know that these media have rather short lifetimes — perhaps a few tens of years at the outside — and even when the media survive, there may be no way of reading them since the infrastructure on which they relied has gone.</p>
<p>This is, of course, confused: to preserve information you do not need to preserve the media on which it resides for any length of time. Since digital information can be copied without loss (or with a very low chance of loss), what you do instead is repeatedly copy the information onto current media. Preserving information is not the same as preserving physical artifacts: rather than a sacred disk rotting in a vault you keep the data spinning all the time on many copies of current media. I have files which originated on Fujitsu Eagles: I doubt there are very many Eagles still spinning or machines which can use them, but the information isn’t in any danger of being lost.</p>
<h2 id="dont-preserve-information-preserve-physical-artifacts">Don’t preserve information: preserve physical artifacts</h2>
<p>Everything above is wrong, because it makes a critical assumption which is not true.</p>
<blockquote>
<p>You can always keep information on current media.</p></blockquote>
<p>This is true only if you are continually working on the system: in order to keep information spinning you need to be willing to buy new systems, transfer the information to the new systems, and keep the power on. But there is no evidence that we can keep the power on for any length of time, and plenty of evidence that we can’t.</p>
<p>This isn’t just dealing with a possible collapse of advanced civilisation, although archivists should worry about that: it’s happened before, and there is no reason to believe it won’t happen again. If we go through a period of several hundred years where our society retreats to some preindustrial (or just pre–1970) level, how much of our digitally-stored information will survive? My guess is that almost none will. And such a collapse is likely.</p>
<p>But much less than that is needed for information to be lost. Consider some large scientific data set — climate data for instance. What happens if political power gets into the hands of people for whom that data is inconvenient, and who remove funding from the organisations which look after that data? It may persist for a while, on ageing disk arrays and tapes, until enough of the redundancy goes away; it may persist for a while even after the power is removed from the systems which hold it. But it will not persist when the rent isn’t paid on the buildings in which those systems live. Within quite a short time that information will be irretrievably lost.</p>
<p><em>The archivists turn out to be right</em>: if you want to preserve information it needs to live on media which remain readable for long periods of time with minimal requirements. In particular there must be no requirement for frequent replacement of hardware, for human intervention, or for power. Choosing a medium, samples of which <em>have already survived for long periods</em>, is a good idea as well. Vellum is not such a bad choice if you only need to preserve a small amount of information. Large scientific data sets present a different problem, but ‘just keep the data spinning’ is probably not a very good solution.</p>No excusesurn:https-www-tfeb-org:-fragments-2017-06-13-no-excuses2017-06-13T15:22:40Z2017-06-13T15:22:40ZTim Bradshaw
<p>As card-carrying members of the liberal elite we have to understand why so many people are so cross. Obviously it is our fault: with our awful progressive views we have prospered at their expense and it is only natural that they should express their anger by supporting politicians who are explicitly racist and misogynistic. That’s just a natural reaction: the people we have oppressed so horribly aren’t actually racists and misogynists, no, they just support politicians who are. It’s all our fault<sup><a href="#2017-06-13-no-excuses-footnote-1-definition" name="2017-06-13-no-excuses-footnote-1-return">1</a></sup>.</p>
<!-- more-->
<p><em>Wait, what?</em></p>
<p>There are four claims here:</p>
<ul>
<li>a lot of people are aggrieved by some possibly-invented ‘liberal elite’ who have somehow cheated them;</li>
<li>they then vote for bigots;</li>
<li>while not themselves being bigots;</li>
<li>and they do this because they are so cross.</li></ul>
<p>Let’s leave the first point aside: there is clearly something to it, although the elites who are really prospering at others’ expense are anything but liberal<sup><a href="#2017-06-13-no-excuses-footnote-2-definition" name="2017-06-13-no-excuses-footnote-2-return">2</a></sup>.</p>
<h2 id="do-people-vote-for-bigots">Do people vote for bigots?</h2>
<p>Let’s take Donald Trump: lots of people voted for him and he is explicitly a bigot. He has famously boasted about attacking women, his firm <a href="https://www.nytimes.com/2016/07/24/opinion/sunday/is-donald-trump-a-racist.html">has been sued</a> for systematically discriminating against black people, he has repeatedly tried to pass legislation which discriminates on grounds of religion and so on. He’s also surrounded himself with white supremacists. There’s no serious argument here: he’s at least racist, misogynistic & clearly prejudiced against muslims and quite possibly antisemitic.</p>
<p>So yes, people vote for bigots.</p>
<h2 id="are-they-then-themselves-bigots">Are they then themselves bigots?</h2>
<p>Here’s the thing: <em>if you vote for someone you know to be a racist then you are a racist</em>. You don’t need to go out and lynch black people yourself to be a racist: it is sufficient to elect people who will do it for you.</p>
<p>The argument that this is not true seems to be, essentially, that all the people who supported Trump were just too stupid to understand what it was they were supporting: their understanding was so weak that they just didn’t realise how unpleasant his views were. This is insulting the intelligence of millions of people, and I don’t believe it: I think they knew exactly what they were voting for and I think that what they voted for was what they wanted.</p>
<p>A second argument is that, yes, they knew he was a racist and a misogynist but that this mattered less to them than other things he represented. A similar argument gets made about parties like the <a href="https://en.wikipedia.org/wiki/Democratic_Unionist_Party">DUP</a>: people supposedly vote for them not because they share their hateful views on homosexuality but because they are willing to put up with those views for the sake of the other things the party represents. This argument is rubbish: if people actually wanted politicians who would do the other things that Trump or the DUP represent while not being bigots, then such politicians would displace Trump or the DUP. That the DUP still exists, and that Trump beat other candidates to become the Republican candidate tells you all you need to know.</p>
<p>Again, yes: almost all of the people who voted for Trump are bigots.</p>
<h2 id="does-being-cross-make-you-a-bigot">Does being cross make you a bigot?</h2>
<p>The final claim is that, somehow, people are not <em>normally</em> bigots, but they become so when they are sufficiently angry. This seems to be something that often happens: everyone knows stories about people who, in moments of stress, intoxication or anger, <a href="http://www.bbc.co.uk/news/entertainment-arts-22782905">shout antisemitic & racist abuse</a>. But this, apparently, isn’t because they actually are antisemites and racists: it’s just something that happens when you’re drunk, or stoned, or cross.</p>
<p>Bullshit. You don’t suddenly become a racist because you are angry: you are already a racist although you dare not say so. When you get sufficiently angry you lose self-control and <em>say what you already think</em>.</p>
<p>So, no, being cross does not make you a bigot: you already are, you’re just a coward as well.</p>
<h2 id="no-excuses">No excuses</h2>
<p>Trump is a racist and a misogynist, and if you voted for him then <em>you are a racist and a misogynist as well</em>, and there is no excuse for this: you didn’t suddenly become one because you were cross, or because some phantom liberal elite made you one, you <em>already were one</em>.</p>
<p>Perhaps this, finally, is something for which the liberal elite really are to blame. It seemed as if racism, misogynism and all the other bigotries were finally being overcome, when in fact they were not. A large number of people still thought like this: they were just not <em>saying</em> what they thought. Until now.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2017-06-13-no-excuses-footnote-1-definition" class="footnote-definition">
<p>‘The Western intelligentsia, snug in its echo-chamber, has done a dismal job of understanding what is going on, either dismissing populists as cranks or demonising them as racists.’ — <a href="http://www.economist.com/news/books-and-arts/21711024-john-judis-has-written-powerful-account-forces-shaking-europe-and-america">The Economist</a>. <a href="#2017-06-13-no-excuses-footnote-1-return">↩</a></p></li>
<li id="2017-06-13-no-excuses-footnote-2-definition" class="footnote-definition">
<p>Trump doesn’t even <em>pretend</em> not to be enriching himself and his family at his country’s expense: if you are looking for an elite who is prospering at the expense of ‘ordinary people’ you don’t have to look further than that. <a href="#2017-06-13-no-excuses-footnote-2-return">↩</a></p></li></ol></div>Surveillance & magicurn:https-www-tfeb-org:-fragments-2017-03-07-surveillance-magic2017-03-07T11:58:36Z2017-03-07T11:58:36ZTim Bradshaw
<p><a href="https://en.wikipedia.org/wiki/Clarke's_three_laws">Clarke’s third law</a> is that</p>
<blockquote>
<p>any sufficiently advanced technology is indistinguishable from magic.</p></blockquote>
<p>It does not apply to organisations who want to intercept communications: if it’s claimed that they can do something which requires magic, then in fact they can’t do that.</p>
<!-- more-->
<p>Donald Trump apparently thinks, or at least pretends to think, that Obama was tapping his phone. <a href="https://www.theguardian.com/world/2017/mar/06/trumps-wiretap-paranoia-reality-modern-surveillance">This article in The Guardian</a>, among many others, points out how ridiculous these claims are. Unfortunately in doing so it perpetuates a common and stupid myth. In particular it contains this claim:</p>
<blockquote>
<p>The security agencies can access electronic devices across the planet with ease. They can listen in to a target’s mobile, even if it is switched off.</p></blockquote>
<p>Can they, in fact, do that? Well, yes, they probably can: you can only know if a modern mobile phone is really <em>off</em> if it has no power source, and since many mobile phones have batteries which can not be removed without destroying the device, you can never really know it is off without destroying it. So, once some suitable bit of software is in the phone it can be used like this.</p>
<p><em>Except that they can not use magic</em>. If a phone is listening to a conversation and transmitting it to someone, then it is using power to do so: it is, in fact, making a call. And modern phones are not famous for their long battery lives: <a href="https://www.apple.com/lae/iphone-7/specs/">Apple claim</a> ‘up to 14 hours’ talk time for the iPhone 7, and that claim will be an upper limit which applies to a phone which is brand new, has very good reception and so on. A more realistic estimate might be 7–10 hours, perhaps even less.</p>
<p>So, if someone is using your phone like this, you can tell, if you are even slightly competent, because its ‘standby’ time will be absolutely terrible. Even more bizarrely, its battery life <em>when ‘off’</em> will be terrible.</p>
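<p>A quick back-of-envelope check (the figures here are rough assumptions for illustration, not measurements): a phone with a \(7\,\mathrm{Wh}\) battery and \(10\) hours of talk time draws about \(0.7\,\mathrm{W}\) while transmitting, while a two-week standby time corresponds to about \(7/336 \approx 0.02\,\mathrm{W}\). If the phone is secretly transmitting even a tenth of the time, the average draw becomes roughly \(0.1\times 0.7 + 0.02 \approx 0.09\,\mathrm{W}\), so the battery lasts about \(7/0.09 \approx 78\) hours: three days instead of two weeks. That is not a subtle difference.</p>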
<p>So if you think people might be using your phone like this you should:</p>
<ul>
<li>turn it off;</li>
<li>do <em>not</em> keep it plugged in to the charger when it is off;</li>
<li>note how fast it runs down when in this state.</li></ul>
<p>Because even government agencies can not do magic.</p>
<hr />
<p>Of course the conspiracy theorists will claim that actually, mobile phones can have very long battery lives, but the technology is somehow being suppressed in order to make surveillance possible. Given that any company which makes use of this technology in its phones would make a huge amount of money, for this to be true all phone-making companies must be controlled by some shadowy Zionist world government. It used to be safe to deride people who believe this as the cranks they are. Unfortunately they seem to be winning: soon we will be being taught this stuff in schools, or at least those of us who have not gone to the gas chambers.</p>Dynamic scope and macrosurn:https-www-tfeb-org:-fragments-2017-01-26-dynamic-scope-and-macros2017-01-26T13:56:36Z2017-01-26T13:56:36ZTim Bradshaw
<p>I’ve recently been writing some <a href="https://en.wikipedia.org/wiki/Emacs_Lisp">Emacs Lisp</a> code to do some massaging of files. Quite apart from having forgotten how primitive elisp is, I hadn’t realised before how hostile dynamic scope was for macros in particular.</p>
<!-- more-->
<p>A very common pattern for macros is <code>call-with-*</code> / <code>with-*</code>, in which there is a functional level which is wrapped by a more syntactically-friendly macro level. For instance, in Common Lisp you can map over lists with <code>mapcar</code>:</p>
<pre><code>(mapcar
 (lambda (e)
   ...)
 ...)</code></pre>
<p>but you might want to map over them with a syntax like</p>
<pre><code>(mapping (e ...)
  ...)</code></pre>
<p>Well, it’s easy to implement this:</p>
<pre><code>(defmacro mapping ((e l) &body forms)
  `(mapcar (lambda (,e) ,@forms) ,l))</code></pre>
<p>Even with CL’s unhygienic macro system & without a mass of gensymmery such a macro is safe.</p>
<p>A good example where CL exposes one side of a pattern like this is <code>with-open-file</code>: you can easily see how to implement this in terms of a function:</p>
<pre><code>(defun call/open-file (fn filespec &rest keys
                       &key &allow-other-keys)
  (let ((s nil))
    (unwind-protect
        (progn
          (setf s (apply #'open filespec keys))
          (funcall fn s))
      (when s (close s)))))

(defmacro with-open-file* ((sn filespecn &rest keysn
                            &key &allow-other-keys)
                           &body forms)
  `(call/open-file (lambda (,sn) ,@forms)
                   ,filespecn ,@keysn))</code></pre>
<p>(This is probably not completely robust code: it’s just meant to get the idea across.)</p>
<p>Scheme exposes the other side of this pattern with <code>call/cc</code>:</p>
<pre><code>(define-syntax-rule (with-cc (c) form ...)
  (call/cc (λ (c) form ...)))</code></pre>
<p>(<code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/misc..rkt)._define-syntax-rule))" style="color: inherit">define-syntax-rule</a></code> may be specific to Racket but, again, this is just meant to get the idea across.)</p>
<p>Well, now think about something like the above <code>call/open-file</code> / <code>with-open-file*</code> in a Lisp dialect with dynamic scope. In particular, what does this do:</p>
<pre><code>(let ((s t))
  (with-open-file* (h ...)
    (when s ...)))</code></pre>
<p>This expands to</p>
<pre><code>(let ((s t))
  (call/open-file (lambda (h) (when s ...)) ...))</code></pre>
<p>But <em><code>call/open-file</code> binds <code>s</code></em>: so the binding of <code>s</code> in the called function is <em>different</em> from the outer binding, and nothing works.</p>
<p>Well, of course, this is something that happens pervasively with dynamically-scoped languages: every binding above you (or below you, depending on your viewpoint) matters, and can infect your namespace. But it’s particularly toxic for macros, because macros very often interpose bits of code into your code, and that code can include bindings which are dynamically, but not lexically, visible, even in the expansion of the macro. Dynamic scope enormously increases the hygiene problems of a macro system.</p>
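<p>Racket is lexically scoped, but its parameters provide dynamic binding as an opt-in, which makes the problem easy to demonstrate concretely (a minimal sketch: <code>p</code> and <code>call-with-thing</code> are invented for the example):</p>
<pre><code>(define p (make-parameter 'global))

(define (call-with-thing f)
  ;; this dynamic binding plays the role of the binding of s
  ;; in the dynamically-scoped expansion above
  (parameterize ([p 'inner])
    (f)))

(parameterize ([p 'outer])
  (call-with-thing (λ () (p)))) ; => 'inner, not 'outer</code></pre>
<p>The caller’s binding is invisible inside the thunk because the dynamically nearer binding wins: exactly the infection described above.</p>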
<p>Dynamic scope is really useful as an option, and systems written in languages which don’t have it generally have to reinvent it, usually badly. But it’s just toxic and horrible as the <em>only</em> option. I can’t understand any more how I managed to use lisps with dynamic scope at all: perhaps I never wrote macros or just expected things to behave in a mysterious and strange way occasionally. Fortunately, even elisp <a href="https://www.gnu.org/software/emacs/manual/html_node/elisp/Lexical-Binding.html#Lexical-Binding">now has the option of being lexically scoped</a>.</p>No futureurn:https-www-tfeb-org:-fragments-2016-10-14-no-future2016-10-14T06:54:37Z2016-10-14T06:54:37ZTim Bradshaw
<p>We’ve been fooling ourselves for thirty years. We believed that the awful toxins that defined society in our youths were, while not yet dead or even nearly dead, clearly dying.</p>
<!-- more-->
<p>We thought that the horrible treatment of people with brown skins, women, gay people, Irish people, Roma, people with the wrong religion or from far away and generally anyone who did not conform to some grey english stereotype was fading away. We thought this because we saw the acceptance of difference everywhere: we went to gigs where the audience was not all of one tribe, we walked through the park and saw couples where one person had come from Poland and the other’s parents from Pakistan. It seemed as if there was hope and, gradually, we stopped worrying.</p>
<p>We were wrong. England is not suddenly becoming a nation of bigots after the brexit vote: it has always been a nation of bigots. All that has changed is that now the bigots are saying what they have always thought.</p>
<!-- ## God save the queen-->
<p>The conventional thing to say about people who voted for brexit is that they were, on the whole, well-intentioned but a bit dim: they were fooled by a group of malignant politicians with their various agendas into voting for something very clearly against their own interests. In other words, stupid people voted for brexit, bamboozled by clever (or, at least, highly educated) people like Boris Johnson & demagogues like Farage.</p>
<p>While Johnson, & especially Farage, have a lot to answer for, I don’t think this is true. Apart from anything else it is insulting the intelligence of people who voted for brexit: I just don’t believe that a huge number of people were stupid in that way. I don’t think that people ever really believed the lies that the leave campaign told: rather, they used these lies as a shield to conceal what they were really after. They were probably concealing it even from themselves in the way we all do; but they knew what they wanted.</p>
<p>What did they want? I think they wanted to live in the past: a version of the England that sputtered and failed at the end of the 1970s: the England that many of them grew up in — that I grew up in.</p>
<p>Well, that seems fairly uncontroversial: this is more-or-less what people <em>said</em> they wanted, isn’t it? To go back to a world before the EU, where people had jobs for life in the pits and steelworks, where England was a great country. Of course that world never really existed outside of television, but we can fix that: we can mix the best of the old with the best of the new, right?</p>
<!-- ## It's a fascist regime-->
<p>But of course, there was rather more to that vanished dream of England than that: quite a lot more in fact. It was a world where black people were treated as second-class citizens at best, and where if they objected then the police would beat them up. Quite often the police would beat them up just for fun. Women were the playthings and servants of men, and no-one really cared if they had a few bruises. Gay people were treated, if anything, even worse than black people. Somehow, sex with children was tolerated, at least if the people doing it were white: it was officially not OK, of course, but everyone knew it went on and nobody did anything about it.</p>
<p>And of course everyone (well, everyone English, which is the same thing) knew that England was the greatest country in the world: Scotland and Wales were just parts of England, European countries were convenient holiday destinations inhabited by people who were greasy and often suspiciously dusky or humourless German-types who were busy eating each other and burning Jews (English people wouldn’t do anything so vulgar, of course, although they certainly didn’t want any Jewish people in their golf club). They didn’t have to be taken very seriously. And as for the rest of the world, well, the empire was a recent memory, and they definitely knew their place, which was serving proper English people.</p>
<!-- ## No future-->
<p>The people who voted brexit want a return to the past, and they want <em>all of it</em>: perhaps they are not saying it quite yet, but I think it is terribly clear that this is what they want. You don’t have to read very hard to interpret what Farage is saying, or what people who talk about ‘the white working class’ mean.</p>
<p>Why? Why have people suddenly changed? It seems impossible that such a vast change in attitudes could happen so quickly. Indeed, it <em>is</em> impossible: such a change in attitudes hasn’t happened, because the attitudes that people now evince <em>are the attitudes that they have always had</em>. For thirty years the people we all now despise as ‘the liberal elite’<sup><a href="#2016-10-14-no-future-footnote-1-definition" name="2016-10-14-no-future-footnote-1-return">1</a></sup> fooled themselves that education and cultural change were gradually making bigotry and xenophobia a thing of the past, but in fact all they did was to make it, for a while, impossible to say what you think.</p>
<p>There is no future.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2016-10-14-no-future-footnote-1-definition" class="footnote-definition">
<p>‘Liberal elite’ will soon start to mean the same as ‘underground resistance’: members of the ‘liberal elite’ will meet in fear in back rooms while uniformed faragists patrol the streets, hanging suspected and real liberal elitists on meat hooks. <a href="#2016-10-14-no-future-footnote-1-return">↩</a></p></li></ol></div>Attacks on financial market infrastructureurn:https-www-tfeb-org:-fragments-2016-07-26-attacks-on-financial-market-infrastructure2016-07-26T12:10:30Z2016-07-26T12:10:30ZTim Bradshaw
<p>A recent article in The Economist talks about a plausible attack on the financial system: <a href="http://www.economist.com/node/21701928">If financial systems were hacked: Joker in the pack</a>. I liked this article, although I think it was a little naïve in two ways.</p>
<!-- more-->
<p><strong>Firstly</strong> it wasn’t clear enough that the ‘recover from a serious incident in two hours’ claim is fantasy. Of course everyone would <em>like</em> to be able to do that and will state to regulators that they can do so, and perhaps some people in the organisations concerned really believe that they can do so. And there are mechanisms in place (DR systems, business continuity volumes and so on) which, <em>for a suitably nice incident</em>, will indeed allow very rapid recovery if everyone is on the ball. But for the sort of incidents described in the article — for instance an incident where you don’t trust your data and soon realise that all your backups for some unknown but long interval are also suspect — the recovery time is likely to be <em>much</em> longer than two hours. Indeed, the important question would be whether recovery is possible at all. There have been much smaller incidents, not caused by malice, where complete recovery was never achieved in the sense that some transactions were lost altogether: there is no reason to assume that full recovery is even possible from a really major attack.</p>
<p><strong>Secondly</strong> and more seriously the article perpetuates the myth of ‘state sponsored actors’: the assumption being that only with the resources of a state would such an attack be possible, and since even malignant states have no interest in this kind of chaos these attacks are not a real worry. This is a touchingly 1950s view: although everyone knows how to make, say, a fission weapon, to actually make one you need to be able to mine huge quantities of ore, run vast numbers of centrifuges and so on, <em>and do this secretly and securely</em>, and only states have that kind of ability. The argument seems to be that breaking into computer systems is somehow a similarly industrial enterprise: perhaps you need vast caverns with serried ranks of hacker drones, relentlessly typing billions of lines of code or something, or enormous super-powerful computers to brute-force encryption. Well, of course, you don’t: you need a small number (possibly one) of sufficiently motivated people with the right skills who can find and exploit a weakness — probably a human weakness — in the system rather than launching the primitive industrial-scale brute-force attack that seems to be what the article imagines. And while states may not be interested in chaos, these tiny groups may well be.</p>
<p>In summary: it’s a good article but it understates the consequences of such attacks, and misrepresents the likely attackers in a way which makes such attacks seem much less plausible.</p>
<p>I hope that these confusions exist only in the minds of journalists, but I fear that the people actually responsible for the security of financial infrastructure also believe them, or at least pretend to do so as such beliefs are very convenient. I have certainly heard both myths repeated by people who ought to know better.</p>
<hr />
<p>This is derived from <a href="https://www.schneier.com/blog/archives/2016/07/the_economist_o_5.html?nc=7#comment-6729175">a comment I made on an article in Bruce Schneier’s blog</a>, in turn based on some personal experience in the financial services industry.</p>The end of summerurn:https-www-tfeb-org:-fragments-2016-06-26-the-end-of-summer2016-06-26T18:56:20Z2016-06-26T18:56:20ZTim Bradshaw
<p>On midsummer’s eve 2016 old people in the UK demonstrated that, by a significant majority, they are xenophobic leeches who are happy to suck the life out of their children and grandchildren, and have now found a way of continuing to do so even after they are dead.</p>
<!-- more-->
<p>The <a href="http://www.bbc.co.uk/news/magazine-36619342">demographics are very clear</a>:</p>
<ul>
<li>27% of those aged 18–24 wanted to leave;</li>
<li>28% of those aged 25–34 wanted to leave;</li>
<li>48% of those aged 35–44 wanted to leave;</li>
<li>56% of those aged 45–54 wanted to leave;</li>
<li>57% of those aged 55–64 wanted to leave;</li>
<li>60% of those aged 65 or older wanted to leave.</li></ul>
<p>People born in the UK in 1962 or earlier were likely to vote leave, while people born in 1982 or after were very likely to vote stay (I was born in 1962).</p>
<p>But perhaps older people are just demonstrating their superior wisdom, and leaving is the right thing to do? In what sense could it be right?</p>
<p>Well, it certainly is going to make people who live in the UK a lot poorer. The <a href="http://www.economist.com/news/britain/21696517-most-estimates-lost-income-are-small-risk-bigger-losses-large-economic">economics are not in doubt</a>: no credible economist thinks that the results of a British exit from the EU will be good. And indeed <a href="http://www.bbc.co.uk/news/business-36611512">the pound collapsed</a> immediately following the result, and <a href="http://www.bbc.co.uk/news/business-36644934">the UK’s credit rating was lowered</a> shortly after that. <a href="http://www.economist.com/news/leaders/21701265-how-minimise-damage-britains-senseless-self-inflicted-blow-tragic-split">There will probably be a recession</a> and the results are likely to be long-lived. This will particularly hit the poor, and of course the cost of this catastrophe enormously outweighs the funding we were providing to the EU.</p>
<p>They have also voted to destroy the United Kingdom they profess to love: Scotland will now almost certainly leave the UK as <a href="http://www.bbc.co.uk/news/uk-scotland-scotland-politics-36621030">the SNP are calling for a second referendum on Scottish independence</a>. I lived in Scotland for 22 years, and was strongly against independence in the last referendum: I would vote for it now, and I imagine it will be a landslide. This will mean that the ‘United Kingdom’ is in fact England in all but name (if you think they will care about Wales, still less Northern Ireland, think again). It will also have a new <em>land border</em> with the EU.</p>
<p>But it’s a matter of <em>democracy</em>: we in
<s>the UK</s>England will now be our own masters, free from awful undemocratic EU practices. Well, let’s leave aside that the EU isn’t actually undemocratic (the ‘unelected’ commissioners are in fact appointed by representatives of the elected governments of the countries which make up the EU, which is at least as democratic as the way the government of the UK is appointed): this just isn’t true. Assuming we’d like to trade with the EU on reasonably favourable terms we’re going to need to agree to their rules, except that, now, we don’t get a say in what those rules are. This is not more democratic: it’s less.</p>
<p>But, they say, we didn’t know any of this last Wednesday! Old people voted in good faith, believing in a bright new future as promised by the leave campaign. Don’t be silly: the leave campaign lied consistently and it was common knowledge that they were lying. For instance take the ’£350 million a week’ figure: the UK Statistics Authority <a href="https://www.statisticsauthority.gov.uk/news/uk-statistics-authority-statement-on-the-use-of-official-statistics-on-contributions-to-the-european-union/">debunked this</a> a month before the referendum, and this was <a href="http://www.independent.co.uk/news/business/news/eu-referendum-statistics-regulator-loses-patience-with-leave-campaign-over-350m-a-week-eu-cost-a7051756.html">widely</a> <a href="http://www.theguardian.com/politics/2016/may/27/uk-statistics-chief-vote-leave-350m-figure-misleading">reported</a> at the time and later. <em>Everyone knew that the leave campaign was built on lies</em>. Everyone knew it would make us poorer, everyone knew the UK would fragment.</p>
<p>Perhaps not everyone knew that <a href="http://www.economist.com/blogs/bagehot/2016/06/anarchy-uk">the leave campaign had no plans at all</a>:</p>
<blockquote>
<p>On live television Faisal Islam, the political editor of SkyNews, was recounting a conversation with a pro-Brexit Conservative MP. “I said to him: ‘Where’s the plan? Can we see the Brexit plan now?’ [The MP replied:] ‘There is no plan. The Leave campaign don’t have a post-Brexit plan…Number 10 should have had a plan.’” The camera cut to Anna Botting, the anchor, horror chasing across her face. For a couple of seconds they were both silent, as the point sunk in. “Don’t know what to say to that, actually,” she replied, looking down at the desk.</p></blockquote>
<p>They don’t just act like upper-class buffoons: they are upper-class buffoons.</p>
<p>Finally let’s debunk one more myth: that immigrants are a great cost to the country. No, they aren’t, and in fact they are <a href="http://www.economist.com/news/britain/21631076-rather-lot-according-new-piece-research-what-have-immigrants-ever-done-us">a significant economic benefit to the country</a>:</p>
<blockquote>
<p>Between 2001 and 2011, the net fiscal contribution of recent arrivals from the eastern European countries that have joined the EU since 2004 has amounted to almost £5 billion. […] Immigrants’ overall positive contribution is explained in part by the fact that they are less likely than natives to claim benefits or to live in social housing.</p></blockquote>
<p>Immigrants are <em>less</em> likely to claim benefits and <em>less</em> likely to live in social housing than natives. Especially, of course, than older natives, who contribute little and consume enormous resources from the health service.</p>
<p>It is very simple: the predominantly older people who voted leave did so because <em>they don’t like foreigners</em>, especially those whose skin is dark: they are at least xenophobic and usually straightforwardly racist. There has already been <a href="http://www.bbc.co.uk/news/uk-politics-eu-referendum-36643213">an increase in racist attacks</a> following the referendum and this will get much worse. They are also selfish: they voted leave even though they knew it would seriously damage the future of young people, <em>because it would not damage theirs</em>. They’re old: they don’t <em>have</em> long futures, and in many cases they have already left the workforce and are living on their pensions.</p>
<p>The older people who voted leave were the greatest winners of the post-war period: they had the NHS, free higher education, stable jobs, pension schemes that worked, and benefited from the long housing boom in the UK. These are people who have done very well out of the country they live in. But they care only about themselves: they don’t like foreign people and, since it has no cost to them to do so, have turned around and savaged the country that gave them everything they have. They have sacrificed the futures of their own children and grandchildren so that they don’t need to see so many foreign faces.</p>
<p>Despite what <a href="https://www.theguardian.com/commentisfree/2016/jun/25/brexit-rift-feelings-honest">some people think</a> this decision was not made by children, childishly expressing feelings they do not understand about their new sibling who need to be soothed and appeased by their parents: these are adults who consciously chose to eat their own grandchildren. There can be no excusing this.</p>
<p>We have always been told to respect our elders: we should help them across the road, give up our seats on trains for them, visit them in their declining years, listen to their advice. I am trying to think why anyone should continue doing this, and I can’t.</p>
<p>Eat the old.</p>Python instead of Lispurn:https-www-tfeb-org:-fragments-2016-06-09-python-instead-of-lisp2016-06-09T18:43:40Z2016-06-09T18:43:40ZTim Bradshaw
<p>Lots of people, even <a href="http://norvig.com/python-lisp.html">famous Lisp hackers</a>, like to claim that ‘Python can be seen as a dialect of Lisp with “traditional” syntax’.</p>
<p>Being famous does not make them right.</p>
<!-- more-->
<h2 id="python-is-nothing-like-lisp">Python is <em>nothing like</em> Lisp</h2>
<p><strong>Expression language.</strong> Lisp is an expression language: everything in the language is an expression and has a value, and there is no distinction between expressions and statements, because there are no statements. Python is not: it has expressions, such as <code>2+3</code>, <code>lambda x: x*2</code> and statements such as <code>x = 3</code>. If expressions and statements are different things then writing macros and any kind of general-purpose <code>lambda</code> becomes very difficult.</p>
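<p>A small illustration in Racket (any Lisp-family language behaves the same way): conditionals and binding forms are expressions with values, so they compose anywhere a value is wanted, including inside a <code>lambda</code>:</p>
<pre><code>;; if returns a value, so it can live inside a λ:
(map (λ (n) (if (even? n) 'even 'odd))
     (list 1 2 3))
;; => '(odd even odd)

;; and binding forms are expressions too:
(+ (let ([x 2]) (* x x)) 1) ; => 5</code></pre>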
<p><strong>Conses.</strong> Lisp has conses, Python does not. Conses are not everything<sup><a href="#2016-06-09-python-instead-of-lisp-footnote-1-definition" name="2016-06-09-python-instead-of-lisp-footnote-1-return">1</a></sup>, but unless you have them you can’t implement them reasonably, and they are extremely useful data structures for many purposes. In particular for conses to be useful you need two things:</p>
<ul>
<li>a good syntax for them and for lists built from them;</li>
<li>good performance — conses should be extremely cheap, so you can’t implement them as a special case of some heavyweight data structure such as a Python list, because there is an enormous header.</li></ul>
<p>This means that conses need to be wired into the language: you can’t take a language without conses and add them, because even if you can get the first (you can’t in Python) you can’t get the second.</p>
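<p>Both requirements are visible in a couple of lines of Racket: pairs have their own read and print syntax, and a list is just a chain of cheap two-slot cells whose tails can be shared rather than copied:</p>
<pre><code>(cons 1 2) ; => '(1 . 2): a bare pair, with its own syntax
(equal? (list 1 2 3)
        (cons 1 (cons 2 (cons 3 '())))) ; => #t: a list is a chain of conses
(define l (list 1 2 3))
(cdr l) ; => '(2 3), sharing structure with l rather than copying</code></pre>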
<p><strong>Symbols.</strong> Lisp has symbols, Python does not. You can use strings, and this works sometimes.</p>
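<p>What symbols buy you, in Racket notation: they are interned, so comparing two of them is a pointer comparison rather than a character-by-character walk:</p>
<pre><code>(eq? 'red 'red) ; => #t: two mentions of a symbol are the same object
(eq? (string->symbol "red") 'red) ; => #t: interning at work</code></pre>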
<p><strong>Lambda.</strong> Lisp has lambda, Python has an extremely limited version. Not being an expression language (see above) and the lack of scoping and block constructs in Python cripples its lambda.</p>
<p><strong>Source code available as a low-commitment data structure.</strong> Lisp has this, Python does not. ‘Low-commitment’ means that it is available before it has been decided what it means, but after it has been turned from a stream of characters into something more interesting. This matters because it makes macros possible: macros which work by transforming streams of characters are doomed to the sort of unspeakable horror of which <a href="http://jinja.pocoo.org/">Jinja2</a> is a good example, while macros which work after it has been decided what the code means can’t make their <em>own</em> decision about what it means, which is half the point of macros.</p>
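<p>You can watch the low-commitment representation being built in Racket: <code>read</code> turns characters into symbols and lists before anything has decided what they mean (<code>with-input-from-string</code> comes from <code>racket/port</code>):</p>
<pre><code>(require racket/port)

(with-input-from-string "(when x (f x))" read)
;; => '(when x (f x)): so far just symbols and lists, not yet code</code></pre>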
<p><strong>Scoping.</strong> Lisp has a multiplicity of scoping constructs and all modern Lisps have lexical scope, with some (Scheme) extending this to control constructs. Binding and assignment are irreparably confused in Python: scope does not work properly and this can never be fixed. A language which requires a <code>global</code> declaration is not going to be fixed by adding <code>nonlocal</code>.</p>
<p><strong>Macros.</strong> Lisp has them, Python doesn’t. Since macros are <em>the point</em> of Lisp, it is really hard to see how the above quote makes any kind of sense.</p>
<p>There is a terrible truth about the perceived arrogance of Lisp hackers that it has taken me a long time to understand. The arrogance is justified: Lisp is, in fact, a better programming language.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2016-06-09-python-instead-of-lisp-footnote-1-definition" class="footnote-definition">
<p>In particular conses are not a useful universal data structure in the way that, perhaps, early Lisp people thought they were. <a href="#2016-06-09-python-instead-of-lisp-footnote-1-return">↩</a></p></li></ol></div>English as she is spokeurn:https-www-tfeb-org:-fragments-2016-01-15-english-as-she-is-spoke2016-01-15T22:09:38Z2016-01-15T22:09:38ZTim Bradshaw
<p>I sometimes make the mistake of reading the letters pages of newspapers.</p>
<!-- more-->
<p>They are a sort of mid–20th-century version of reddit with slightly less overt bigotry but a much greater sense of entitlement. And, in 2016, you can still find people writing things like <a href="http://www.theguardian.com/science/2016/jan/13/ay-up-why-the-stress-on-indefinite-articles">this</a>:</p>
<blockquote>
<p>What is driving me crazy is that nowadays everybody, including professionals all over the BBC and other channels, says “ay” all the time, instead of the correct short “a”.</p></blockquote>
<p>The right questions to ask about statements like this are ‘who decided that short “a”<sup><a href="#2016-01-15-english-as-she-is-spoke-footnote-1-definition" name="2016-01-15-english-as-she-is-spoke-footnote-1-return">1</a></sup> was correct, and what authority did they have so to do?’ The answers are ‘it doesn’t matter’ and ‘none’.</p>
<p>English<sup><a href="#2016-01-15-english-as-she-is-spoke-footnote-2-definition" name="2016-01-15-english-as-she-is-spoke-footnote-2-return">2</a></sup> is a <em>natural</em> language: it is not defined by a self-appointed standards body but instead is an evolving collection of closely-related languages<sup><a href="#2016-01-15-english-as-she-is-spoke-footnote-3-definition" name="2016-01-15-english-as-she-is-spoke-footnote-3-return">3</a></sup> which is <em>defined by its users</em> — by the people who speak it, with the written form (which is often not particularly closely related to the spoken form) defined by the people who write it.</p>
<p>If enough speakers of English decide to pronounce a word one way rather than another then <em>that is the correct pronunciation of that word in the language they speak</em>. I may not like it and you may not like it, but we don’t get to decide how they speak. We may even belong to a community which pronounces the word a different way, and our English is then just another member of the family of Englishes: one which may flourish or may die out.</p>
<p>It is very easy to see that this is true: consider the languages of Shakespeare and of Chaucer, both of which can be called English. The written language of Shakespeare is largely, but far from completely, comprehensible to a modern English speaker: the spoken language probably would not be. The written language of Chaucer is largely <em>in</em>comprehensible to a modern English speaker, and the spoken language certainly would be incomprehensible. Are their Englishes correct, while ours are wrong? No, they are different, because the language has changed, and <em>continues to change today</em>. Go and listen to BBC announcers from the 1940s if you doubt this: yes, that really is how (some) people spoke then.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2016-01-15-english-as-she-is-spoke-footnote-1-definition" class="footnote-definition">
<p>‘Short “a”’ is really what a linguist would call ə — schwa — I think. <a href="#2016-01-15-english-as-she-is-spoke-footnote-1-return">↩</a></p></li>
<li id="2016-01-15-english-as-she-is-spoke-footnote-2-definition" class="footnote-definition">
<p><a href="https://en.m.wikisource.org/wiki/English_As_She_Is_Spoke">As she is spoke</a>. <a href="#2016-01-15-english-as-she-is-spoke-footnote-2-return">↩</a></p></li>
<li id="2016-01-15-english-as-she-is-spoke-footnote-3-definition" class="footnote-definition">
<p>We like to call the members of the language family ‘dialects’ so we can privilege one member as being ‘standard English’ and pretend to ourselves that it is the language of which the others are merely dialects so we can look down on people who speak them: those who speak ‘black English’ or ‘with a regional accent’. This is not, of course, bigoted: those people — some of whom look suspiciously, well, <em>foreign</em>, if you know what I mean — just need to learn to speak proper English like I do. <a href="#2016-01-15-english-as-she-is-spoke-footnote-3-return">↩</a></p></li></ol></div>Macros in Racket, part three: checking boolean operatorsurn:https-www-tfeb-org:-fragments-2015-12-12-macros-in-racket-part-three2015-12-12T10:59:54Z2015-12-12T10:59:54ZTim Bradshaw
<p>I wanted to see if I could write a mildly complicated macro in <a href="http://racket-lang.org/">Racket</a> without becoming too confused. I can, although I am not sure it is terribly idiomatic.</p>
<p>This is the third part of a series on writing macros in Racket for someone used to Common Lisp, although it is mostly independent of the previous parts. The previous parts are <a href="../../../../2015/01/13/macros-in-racket-part-one/">part one</a> & <a href="../../../../2015/01/28/macros-in-racket-part-two">part two</a>.</p>
<!-- more-->
<p>One of the nice things about Lisp-family languages is that you can write your own control constructs, and it’s essentially easy to do so: if <code><a href="http://docs.racket-lang.org/reference/when_unless.html#(form._((lib._racket/private/letstx-scheme..rkt)._when))" style="color: inherit">when</a></code> did not exist then you could write it:</p>
<pre><code>(define-syntax-rule (when test form ...)
  (and test
       (begin form ...)))</code></pre>
<p>This kind of extensibility is one of the wonders of Lisp and Scheme: it’s tempting to say that it makes them better than programming languages which can’t do this but that’s not correct: it makes them <em>incomparable</em> to such languages: Lisp<sup><a href="#2015-12-12-macros-in-racket-part-three-footnote-1-definition" name="2015-12-12-macros-in-racket-part-three-footnote-1-return">1</a></sup> programs can reason about <em>themselves</em> and often do<sup><a href="#2015-12-12-macros-in-racket-part-three-footnote-2-definition" name="2015-12-12-macros-in-racket-part-three-footnote-2-return">2</a></sup>. Everything about Lisp really leads to this ability.</p>
<p>When I taught (Common) Lisp to people one of the things I would try to get across was this ability of macros to extend the control constructs in the language: people often thought of macros as a way of essentially inlining code<sup><a href="#2015-12-12-macros-in-racket-part-three-footnote-3-definition" name="2015-12-12-macros-in-racket-part-three-footnote-3-return">3</a></sup>, but that’s not what they’re actually good for. If you can add control constructs to your language, then you can make a <em>new language</em>, and <em>that’s</em> what Lisp macros are about, and therefore what <em>Lisp</em> is about.</p>
<p>A good way to get this across to people is to pretend that Lisp doesn’t have some control construct, and write it as a macro. This is easier than inventing new control constructs both because it doesn’t require thinking of a domain where they might be useful and because the existing control constructs have clear semantics. Reimplementing existing control constructs also demonstrates how the language is already built up from a more primitive language by macros and how the approach to solving problems in Lisp is to <em>design and implement a language</em> in which to talk about the problem, where that language is seamlessly built on the underlying Lisp, and can inherit all of its power and flexibility, <em>including the ability to extend the language</em>.</p>
<p>An advantage of reimplementing existing control constructs for teaching Lisp is that you can compare the new construct to the existing one, and with some small constraints you can do this exhaustively, so you can know whether you have actually implemented it right. This is, obviously, not possible in general, but if the operator has trivial syntax (so not <code><a href="http://docs.racket-lang.org/reference/if.html#(form._((lib._racket/private/letstx-scheme..rkt)._cond))" style="color: inherit">cond</a></code>) and if you limit the arguments of the operator to booleans then you can enumerate all the possible arguments in the obvious way, and so long as it returns a result for all combinations of arguments (does not fail to halt in other words) and is deterministic then there are only two things you need to check:</p>
<ol>
<li>does the operator produce the same result for all combinations of arguments (\(2^n\) possibilities for \(n\) arguments) as the existing one?</li>
<li>does the operator evaluate its arguments the same number of times as the existing one for all these combinations?</li></ol>
<p>So, for instance, <code><a href="http://docs.racket-lang.org/reference/if.html#(form._((quote._~23~25kernel)._if))" style="color: inherit">if</a></code> takes three arguments (in Racket) and should evaluate the first exactly once, and the others at most once, as well as returning the correct value.</p>
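<p>The enumeration itself is just counting: each integer from \(0\) to \(2^n-1\) encodes one combination of \(n\) booleans in its bits, which is what the macro below uses <code>bitwise-bit-set?</code> for. For instance:</p>
<pre><code>(for/list ([c (expt 2 2)]) ; n = 2, so 4 cases
  (list (bitwise-bit-set? c 0) (bitwise-bit-set? c 1)))
;; => '((#f #f) (#t #f) (#f #t) (#t #t))</code></pre>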
<p>Obviously such a check is not a full check of the operator — it does not tell you what it does with non-boolean arguments for instance. But I was interested in writing the check largely because it’s clearly a reasonably hairy macro which I know how to write in CL and wanted to see if I could write in Racket (I’m not very likely to teach people Lisp again).</p>
<h2 id="what-the-macro-needs-to-do">What the macro needs to do</h2>
<p>The idea is that to compare two boolean operators <code>o1</code> and <code>o2</code> which take <code>n</code> arguments you need to generate code which looks like this:</p>
<pre><code>(for/and ([c (expt 2 n)])
  (let ([a1 (bitwise-bit-set? c 0)] ...)
    (let ([o1c1 0] ...)
      (let ([o2c1 0] ...)
        (and (eq? (o1 (begin (set! o1c1 (+ o1c1 1)) a1) ...)
                  (o2 (begin (set! o2c1 (+ o2c1 1)) a1) ...))
             (= o1c1 o2c1) ...)))))</code></pre>
<p>So <code>a1</code> is the first argument, <code>o1c1</code> counts how many times <code>o1</code> evaluates it, and <code>o2c1</code> counts how many times <code>o2</code> evaluates it, and so on. I decided to compare the operators with <code><a href="http://docs.racket-lang.org/reference/Equality.html#(def._((quote._~23~25kernel)._eq~3f))" style="color: inherit">eq?</a></code> rather than <code><a href="http://docs.racket-lang.org/reference/Equality.html#(def._((quote._~23~25kernel)._eqv~3f))" style="color: inherit">eqv?</a></code> for no very good reason except that it works for operators whose results are booleans, which is what I was interested in. I should almost certainly use <code>eqv?</code> I think — certainly the <code>-equivalent</code> in the name would imply that — but I’m not.</p>
<p>It’s clear that a loop like that checks all of the \(2^n\) possibilities for the arguments, where each argument can be either <code>#f</code> or <code>#t</code> only. So this does an exhaustive check of all the possibilities, and provided <code>o1</code> and <code>o2</code> are deterministic and halt on all their arguments it will tell you whether they are equivalent.</p>
<p>And finally, this must be written as a macro, because the operators it is testing are themselves not generally functions: in particular things like <code><a href="http://docs.racket-lang.org/reference/if.html#(form._((quote._~23~25kernel)._if))" style="color: inherit">if</a></code> and <code><a href="http://docs.racket-lang.org/reference/if.html#(form._((lib._racket/private/letstx-scheme..rkt)._or))" style="color: inherit">or</a></code> are obviously themselves not functions.</p>
<h2 id="things-i-did-not-know-how-to-do">Things I did not know how to do</h2>
<p>The big thing I didn’t know how to do here was to make up new identifiers: all the counters need to be created, and possibly also the argument names. In CL you’d do this with <code>make-symbol</code> or <code>gensym</code> or something like that. Assuming I want to use <code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/stxcase-scheme..rkt)._syntax-case))" style="color: inherit">syntax-case</a></code> rather than writing a CL-style construct-the-form-with-backquote-and-use-<code><a href="http://docs.racket-lang.org/reference/stxops.html#(def._((quote._~23~25kernel)._datum-~3esyntax))" style="color: inherit">datum->syntax</a></code> macro (which I very much do want to do) then there are two problems:</p>
<ol>
<li>constructing the names of the counters;</li>
<li>making them available as pattern variables.</li></ol>
<p>Well, (2) is easy: you can use nested <code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/stxcase-scheme..rkt)._syntax-case))" style="color: inherit">syntax-case</a></code>s, or equivalently but much more prettily, <code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/stxcase-scheme..rkt)._with-syntax))" style="color: inherit">with-syntax</a></code> to bind the pattern variables. And it turns out that <code>with-syntax</code> is willing to do a lot of work on your behalf: if you give it something which is not a syntax object it will massage it into one for you. So, in particular, this works:</p>
<pre><code>(with-syntax ([(o1c ...) (list ...)])
  ...)</code></pre>
<p>It takes the list it is given, turns it into a syntax object (with <code>datum->syntax</code> I suppose) and then does the matching. So you can be really lazy here: all you need to invent is a list of identifier syntax objects, and <code>with-syntax</code> will do the rest, making the program a lot less noisy. This is a really neat feature, although it might lead you to get confused about what is, and what is not, a syntax object I suppose. Anyway, I used it ruthlessly.</p>
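<p>A tiny example of this laziness at the REPL (the numbers are arbitrary: the point is that a plain list is accepted where a syntax object is wanted):</p>
<pre><code>(syntax->datum
 (with-syntax ([(x ...) (list 1 2 3)]) ; a plain list, not a syntax object
   #'(+ x ...)))
;; => '(+ 1 2 3)</code></pre>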
<p>So this leaves (1). You could obviously do this with something like <code>(datum->syntax ctx (string->symbol (format ...)))</code>, but Racket provides a nice shorthand for that in the form of <code><a href="http://docs.racket-lang.org/reference/syntax-util.html#(def._((lib._racket/syntax..rkt)._format-id))" style="color: inherit">format-id</a></code>: <code>(format-id ctx "~a-count" v)</code> will construct an identifier syntax object from <code>v</code> using <code>ctx</code> as lexical context. And it will do the appropriate magic if <code>v</code> is an identifier syntax object: extract the symbol from it and use it as the argument to <code><a href="http://docs.racket-lang.org/reference/Writing.html#(def._((quote._~23~25kernel)._format))" style="color: inherit">format</a></code> in the appropriate way.</p>
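<p>For example, at the REPL (with <code>racket/syntax</code> required; <code>#'here</code> is just a convenient context to borrow):</p>
<pre><code>(require racket/syntax)

(format-id #'here "~a-count" #'foo) ; an identifier named foo-count
(format-id #'here "~a-count" 'foo)  ; symbols get the same treatment</code></pre>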
<p>So it looks pretty straightforward to construct lists of identifiers and bind them to pattern variables. The final thing that confuses me is what lexical context to use for the identifiers. The macro should be hygienic, which means they <em>can’t</em> have the context of the syntax object it is working on, but I think can have more-or-less any other context where they have no existing meaning: I just invented an object for them, which I think is safe, although I am a bit confused about this.</p>
<h2 id="what-users-see">What users see</h2>
<p>I spent a really long time stuck on what the syntax of the macro should be: this is entirely stupid because it just does not matter that much. The reason I got stuck is that it <em>would</em> matter if this was a real library and I am constitutionally incapable of writing things without worrying about that kind of thing. Eventually I decided that it would be best if the user provided the argument names as a list, because they generally make sense to users and because I didn’t want to get into something which looked as if you could pass it an integer when in fact what it needs is a <em>literal</em> integer. So I decided on a syntax like this:</p>
<pre><code>(boolean-operators-equivalent? o1 o2 (a1 ...))</code></pre>
<p>So, for instance:</p>
<pre><code>(boolean-operators-equivalent? if my-if (test then else))</code></pre>
<p>I still don’t really like this; but I’m just playing so, well, it will do.</p>
<h2 id="additional-cleverness">Additional cleverness</h2>
<p>I wanted to report syntax errors in a reasonable way: apparently the proper way to do this is using <code><a href="http://docs.racket-lang.org/syntax/Parsing_Syntax.html#(form._((lib._syntax/parse..rkt)._syntax-parse))" style="color: inherit">syntax-parse</a></code> but I am not ready to understand that yet, so I used <code><a href="http://docs.racket-lang.org/reference/syntax-util.html#(def._((lib._racket/syntax..rkt)._wrong-syntax))" style="color: inherit">wrong-syntax</a></code> and the <code><a href="http://docs.racket-lang.org/reference/syntax-util.html#(def._((lib._racket/syntax..rkt)._current-syntax-context))" style="color: inherit">current-syntax-context</a></code> parameter to get reasonable-looking errors.</p>
<p>I thought it would be nice to be able to report failures of equivalence, so there is a parameter which controls that and the expansion of the macro includes a check for the parameter and prints the failed cases if it’s true. All this happens at run time (phase 0) of course.</p>
<h2 id="the-macro-itself">The macro itself</h2>
<p>So, finally, here it is.</p>
<pre><code>(require (for-syntax (only-in racket/syntax format-id
                              current-syntax-context wrong-syntax)))

(define boe-report-failure? (make-parameter #f))

(define-syntax (boolean-operators-equivalent? stx)
  ;; Given the names of two boolean operators and a list of argument
  ;; names, expand to a form which tests that they are equivalent, by
  ;; evaluating them with arguments bound to all the combinations of #t
  ;; and #f, and also checking that they evaluate the same arguments
  ;; in each case.
  ;;
  (parameterize ([current-syntax-context stx])
    (syntax-case stx ()
      [(_ o1 o2 (v ...))
       (let* ([vars (syntax->list #'(v ...))]
              [nvars (length vars)])
         ;; This check could be a guard, but we need the bindings
         ;; anyway, so.
         (for ([var vars])
           (unless (identifier? var)
             (wrong-syntax var "not an identifier")))
         ;; vars is now a list of identifiers, and nvars is how many
         ;; there are.  We need to construct syntax for check
         ;; variables for each var and operator, as well as
         ;; construct 2^n and a list of bit numbers.  This is being
         ;; fairly fast and loose: it turns out that various things
         ;; get automagically converted into syntax objects, and I
         ;; have not cared about the context for numbers (what is
         ;; it?).  In general I am a bit confused about what the
         ;; context should be here, but it clearly should *not* be
         ;; stx.
         ;;
         (with-syntax ([(o1c ...) (for/list ([v vars])
                                    (format-id #'boe "~a-1-eval-count" v))]
                       [(o2c ...) (for/list ([v vars])
                                    (format-id #'boe "~a-2-eval-count" v))]
                       [2^n (expt 2 nvars)]
                       [(b ...) (for/list ([i nvars]) i)])
           ;; And now just write the pattern we want.  '...' is pretty
           ;; clever, it turns out.
           #'(for/and ([c 2^n])
               (let ([v (bitwise-bit-set? c b)] ...)
                 (let ([o1c 0] ...)
                   (let ([o2c 0] ...)
                     (or (and (eq? (o1 (begin (set! o1c (+ o1c 1)) v) ...)
                                   (o2 (begin (set! o2c (+ o2c 1)) v) ...))
                              (= o1c o2c) ...)
                         (begin
                           (when (boe-report-failure?)
                             (eprintf "Not equivalent:~% ~a~% ~a~%"
                                      (list 'o1 `(,v ,o1c) ...)
                                      (list 'o2 `(,v ,o2c) ...)))
                           #f))))))))]
      [else
       (wrong-syntax #'else "expecting o1 o2 (a1 ...)")])))</code></pre>
<p>To my astonishment, this worked pretty much first time (it did not initially have the <code>wrong-syntax</code> stuff, but this was easy compared to the rest of it):</p>
<pre><code>> (define-syntax-rule (if/broken test then else)
    (or (and test then) else))
> (boe-report-failure? #t)
> (boolean-operators-equivalent? if if/broken (test then else))
Not equivalent:
 (if (#t 1) (#f 1) (#f 0))
 (if/broken (#t 1) (#f 1) (#f 1))
#f</code></pre>
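<p>For contrast, a sanity check I would expect to pass (my example, not from the gist linked below): an operator trivially equivalent to <code>if</code>.</p>
<pre><code>> (define-syntax-rule (my-if test then else)
    (if test then else))
> (boolean-operators-equivalent? if my-if (test then else))
#t</code></pre>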
<p>The macro, complete with some tests and other infrastructure can be found <a href="https://gist.github.com/tfeb/3d535a2fc755e4ee5dfb">here</a><sup><a href="#2015-12-12-macros-in-racket-part-three-footnote-4-definition" name="2015-12-12-macros-in-racket-part-three-footnote-4-return">4</a></sup>.</p>
<h2 id="notes-and-queries">Notes and queries</h2>
<p>I still don’t know whether this is really idiomatic Racket, although I am reasonably happy that I understand what is going on. There are a couple of things I am not sure about:</p>
<ul>
<li>is the context for the count variables right? I think it is, but I am not sure;</li>
<li>the macro relies heavily on Racket’s extremely smart behaviour with <code>...</code> — I am still unclear just <em>how</em> smart this is and whether I am relying on things which are not actually specified to happen;</li>
<li>similarly it relies on <code>with-syntax</code> being willing to convert things to syntax objects for you, which I am not sure is safe.</li></ul>
<p>However, even with these worries, I think it’s pretty clear that Racket macros are significantly nicer than CL macros, if also significantly more opaque.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2015-12-12-macros-in-racket-part-three-footnote-1-definition" class="footnote-definition">
<p>I am going to use ‘Lisp’ to mean ‘Lisp-family’ from now on. This is not meant to denigrate Scheme — this post is about Racket, after all — I just need a term which is not too clumsy. <a href="#2015-12-12-macros-in-racket-part-three-footnote-1-return">↩</a></p></li>
<li id="2015-12-12-macros-in-racket-part-three-footnote-2-definition" class="footnote-definition">
<p>Of course, programs in other languages often do end up reasoning about themselves: people end up writing little languages all the time. But you only have to look at most examples of this sort of thing to realise how far ahead Lisp is: I’m currently having to deal with a system whose configuration files are in a mutant version of Windows ini file syntax, with a preprocessor which is entirely unaware of that syntax, and an entire other language which lives <em>in strings in the base language</em>. The preprocessor does not know about the string syntax so it pokes down into this inner language as well. I’d like to say that <a href="https://en.wikipedia.org/wiki/Greenspun's_tenth_rule">Greenspun’s tenth law</a> applies, but that would imply a level of sophistication entirely missing in this horrible thing: all I want to do is leave this job and never think about it again. <a href="#2015-12-12-macros-in-racket-part-three-footnote-2-return">↩</a></p></li>
<li id="2015-12-12-macros-in-racket-part-three-footnote-3-definition" class="footnote-definition">
<p>Macros were often used to inline code in the days of primitive compilers of course, but that’s a long time ago now. <a href="#2015-12-12-macros-in-racket-part-three-footnote-3-return">↩</a></p></li>
<li id="2015-12-12-macros-in-racket-part-three-footnote-4-definition" class="footnote-definition">
<p>I may move it somewhere more permanent in due course, so bookmark this at your peril. <a href="#2015-12-12-macros-in-racket-part-three-footnote-4-return">↩</a></p></li></ol></div>The weakest passwords you can get away withurn:https-www-tfeb-org:-fragments-2015-10-14-the-weakest-passwords-you-can-get-away-with2015-10-14T16:55:22Z2015-10-14T16:55:22ZTim Bradshaw
<p>Or: why password strength checkers are useless.</p>
<!-- more-->
<p>A lot of people work in environments where they have to change password every few months, and where there are restrictions on what passwords must look like. Here is how to deal with that, if you don’t care about security.</p>
<ol>
<li>Pick two strings which are complicated enough to keep the password checker happy, which I’ll call \(s_1\) and \(s_2\). Remember them.</li>
<li>Also remember a two-digit count, starting from \(00\).</li>
<li>The first password is \(0s_10\), the second is \(0s_20\), the third is \(0s_11\), the fourth \(0s_21\) and so on: each time you need to change passwords you swap between the two strings, and every <em>other</em> time you increment the count.</li></ol>
<p>This gives you two hundred passwords, at the cost of remembering two strings and a two-digit count: if you have to change password every three months this will last you fifty years.</p>
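<p>If you want the scheme spelled out, here is a sketch in Racket (the function name and its arguments are my own invention; \(n\) counts password changes from zero):</p>
<pre><code>(define (weak-password n s1 s2)
  (let ([s (if (even? n) s1 s2)]   ; swap between the two strings each time
        [c (quotient n 2)])        ; increment the count every other time
    ;; wrap the two digits of the count around the string
    (format "~a~a~a" (quotient c 10) s (remainder c 10))))</code></pre>
<p>So <code>(weak-password 2 "s1" "s2")</code> is <code>"0s11"</code>, and \(n\) from \(0\) to \(199\) gives the two hundred passwords.</p>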
<p>This works because the thing that is forcing you to change password can know two things:</p>
<ul>
<li>the current and new passwords, in plain;</li>
<li>the hashes of all your previous passwords.</li></ul>
<p>So what you need to ensure is that each password change changes enough to keep the checker happy, and that all the hashes are different. This algorithm achieves that, while also ensuring that you have to remember almost nothing. The count is wrapped around the strings just in case the checker is looking for things that look like they have trailing counts: you might need to obfuscate it in other ways if checkers get more clever<sup><a href="#2015-10-14-the-weakest-passwords-you-can-get-away-with-footnote-1-definition" name="2015-10-14-the-weakest-passwords-you-can-get-away-with-footnote-1-return">1</a></sup>.</p>
<p>Of course these passwords are terribly weak: if you know one of them you know half of them, and if you know any sequential pair you know all of them. But, if you don’t care about security but merely the appearance of security, you can use tricks like this.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2015-10-14-the-weakest-passwords-you-can-get-away-with-footnote-1-definition" class="footnote-definition">
<p>Counting in hex or base 36 is a good trick: the only thing that matters is to have something you can easily remember and which changes each time. <a href="#2015-10-14-the-weakest-passwords-you-can-get-away-with-footnote-1-return">↩</a></p></li></ol></div>Melting the Antarctic ice sheeturn:https-www-tfeb-org:-fragments-2015-10-10-melting-the-antarctic-ice-sheet2015-10-10T10:31:28Z2015-10-10T10:31:28ZTim Bradshaw
<p>How long might this take, in the worst case?</p>
<!-- more-->
<p>The Antarctic ice sheet has a volume of about \(26.5\times 10^6\,\mathrm{km}^3\), according to <a href="https://www.bas.ac.uk/project/bedmap-2/">Bedmap2</a><sup><a href="#2015-10-10-melting-the-antarctic-ice-sheet-footnote-1-definition" name="2015-10-10-melting-the-antarctic-ice-sheet-footnote-1-return">1</a></sup>. This is \(2.7\times 10^{16}\,\mathrm{m}^3\) of ice. The density of ice is about \(10^3 \,\mathrm{kg/m^3}\) (about a tonne per cubic metre, which is approximately the same as water of course), so this is about \(2.7\times 10^{19}\,\mathrm{kg}\) of ice. The <a href="https://en.wikipedia.org/wiki/Enthalpy_of_fusion">enthalpy of fusion</a> of water is about \(3.3\times10^5\,\mathrm{J/kg}\) so, if we assume that the ice is all at freezing point<sup><a href="#2015-10-10-melting-the-antarctic-ice-sheet-footnote-2-definition" name="2015-10-10-melting-the-antarctic-ice-sheet-footnote-2-return">2</a></sup>, then we require \(8.9\times 10^{24}\,\mathrm{J}\) to melt it all.</p>
<p>Let’s assume we use the Sun to do this. The <a href="https://en.wikipedia.org/wiki/Solar_constant">solar constant</a> is about \(1.4\times 10^3 \,\mathrm{W/m^2}\): this is the amount of power per square metre that the Sun provides at the top of the atmosphere. So, imagine we use <em>all</em> of the power that the Earth intercepts from the Sun to do this<sup><a href="#2015-10-10-melting-the-antarctic-ice-sheet-footnote-3-definition" name="2015-10-10-melting-the-antarctic-ice-sheet-footnote-3-return">3</a></sup>. Well, the Earth’s radius is about \(6.4\times 10^6\,\mathrm{m}\) so the total power available is about \(1.4\times 10^3 \times \pi \times (6.4\times 10^6)^2\,\mathrm{W} \approx 1.8\times 10^{17}\,\mathrm{W}\), or about \(1.8\times 10^{17}\,\mathrm{J/s}\).</p>
<p>So to melt the Antarctic ice cap, using all of the power from the Sun that reaches the top of the atmosphere would take</p>
<p>\[
\frac{8.9\times 10^{24}}{1.8\times 10^{17}}\,\mathrm{s}
= 4.9\times 10^7\,\mathrm{s}
\]</p>
<p>Well, there are about \(32\times 10^6\) seconds in a year, so this is about 1.5 years.</p>
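<p>The whole calculation is small enough to check in a few lines of Racket (all figures approximate, as above):</p>
<pre><code>(require racket/math)                ; for pi
(define melt-energy                  ; J needed to melt the ice sheet
  (* 2.7e16                          ; m^3 of ice
     1e3                             ; kg/m^3, density of ice
     3.3e5))                         ; J/kg, enthalpy of fusion
(define intercepted-power            ; W crossing the top of the atmosphere
  (* 1.4e3 pi (expt 6.4e6 2)))
(/ melt-energy intercepted-power)    ; about 4.9e7 seconds</code></pre>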
<p>Of course we can’t use all the Sun’s power: even if we had the technology to do this (which we are <em>not anywhere near</em> having!), it would cause an inconceivable catastrophe for the rest of the planet: this would be a winter night which lasted for a year and a half. Everyone would die.</p>
<p>A plausible figure might be a tenth of one percent<sup><a href="#2015-10-10-melting-the-antarctic-ice-sheet-footnote-4-definition" name="2015-10-10-melting-the-antarctic-ice-sheet-footnote-4-return">4</a></sup>: in this case the Antarctic ice sheet would melt in about \(1500\) years.</p>
<hr />
<p>Please note: I am not arguing that melting ice sheets caused by anthropogenic climate change is not a problem: it is. For instance <em>there are more than \(70\,\mathrm{m}\) of sea level rise locked in the Antarctic ice sheet</em>: melting even a small fraction of this ice is catastrophic. And melting is not the only problem: if significant parts of ice sheets end up as sea ice before melting, then the sea level rise can happen much faster. And sea level rise is just <em>one</em> of the problems caused by ice sheets melting.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2015-10-10-melting-the-antarctic-ice-sheet-footnote-1-definition" class="footnote-definition">
<p><a href="http://www.bbc.co.uk/news/science-environment-21692423">BBC news article on Bedmap2</a>, <a href="http://www.the-cryosphere.net/7/375/2013/tc-7-375-2013.pdf">paper (PDF)</a>. <a href="#2015-10-10-melting-the-antarctic-ice-sheet-footnote-1-return">↩</a></p></li>
<li id="2015-10-10-melting-the-antarctic-ice-sheet-footnote-2-definition" class="footnote-definition">
<p>The ice is, of course, far below freezing so the actual energy required will be much greater. <a href="#2015-10-10-melting-the-antarctic-ice-sheet-footnote-2-return">↩</a></p></li>
<li id="2015-10-10-melting-the-antarctic-ice-sheet-footnote-3-definition" class="footnote-definition">
<p>This is enormously more than the amount of power that we could plausibly use: see later. <a href="#2015-10-10-melting-the-antarctic-ice-sheet-footnote-3-return">↩</a></p></li>
<li id="2015-10-10-melting-the-antarctic-ice-sheet-footnote-4-definition" class="footnote-definition">
<p>This is just a number I have pulled out of thin air: one percent seems too high, so perhaps a tenth of one percent is plausible. <a href="#2015-10-10-melting-the-antarctic-ice-sheet-footnote-4-return">↩</a></p></li></ol></div>Greenspunningurn:https-www-tfeb-org:-fragments-2015-10-08-greenspunning2015-10-08T15:16:56Z2015-10-08T15:16:56ZTim Bradshaw
<p>Three approaches to solving problems on computers.</p>
<!-- more-->
<p>When faced with a computational problem there are three common approaches:</p>
<ol>
<li>write a program to solve the problem;</li>
<li>write a tool to solve the problem and other problems of the same kind;</li>
<li>write a programming language in which you can then write tools which solve problems of the same, and other, kinds.</li></ol>
<p>Most people start by doing the first. Bradshaw’s corollary to <a href="https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule">Greenspun’s tenth law</a> states:</p>
<ol>
<li>for problems of size \(s \ge s_1\), regardless of the initial approach, the final result is as if the third approach had been taken, even if this is not understood by the people solving the problem;</li>
<li>there is a problem size \(s_0\) above which it is most efficient to take the third approach from the beginning;</li>
<li>\(s_0 \lt s_1\).</li></ol>
<p>What this means is that, if you have a sufficiently large problem (\(s \ge s_1\)) to solve then, whatever your intentions, you will inevitably end up creating a programming language as part of the solution. And there is a range of problems smaller than this (\(s \in (s_0, s_1)\)) for which the <em>quickest</em> way to solve the problem is to design and implement a programming language.</p>
<p>So, when approaching a problem, it is important to understand the values of \(s_0\) & \(s_1\) and how they compare to \(s\). These values are hard to discover: a good trick is to start with a platform which makes \(s_0\) very small and always take the third approach.</p>Black body planeturn:https-www-tfeb-org:-fragments-2015-09-30-black-body-planet2015-09-30T14:51:09Z2015-09-30T14:51:09ZTim Bradshaw
<p>A model of the planets as black bodies is surprisingly accurate, except in one interesting case<sup><a href="#2015-09-30-black-body-planet-footnote-1-definition" name="2015-09-30-black-body-planet-footnote-1-return">1</a></sup>.</p>
<!-- more-->
<h2 id="in-theory">In theory</h2>
<p>A <a href="https://en.wikipedia.org/wiki/Black_body">black body</a> is an ideal object which absorbs all radiation which falls on it, and then reemits it as thermal radiation. Real objects are, of course, not black bodies, but they are often surprisingly close. One nice thing about black bodies is that there is a nice equation which relates the amount of power they radiate to their temperature:</p>
<p>\[ \frac{P}{A} = \sigma T^4 \]</p>
<p>Where \(P\) is power, \(A\) is area and \(\sigma\) is the <a href="https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_constant">Stefan-Boltzmann constant</a> which is \(\sigma \approx 5.67\times10^{−8}\,\mathrm{Wm^{-2}K^{−4}}\): this formula tells you the power per unit area that a black body radiates, at a given temperature.</p>
<p>Well, obviously you can rearrange this formula to get the temperature:</p>
<p>\[ T = \left(\frac{P}{A\sigma}\right)^{1/4} \]</p>
<p>In other words temperature goes as the fourth root of power.</p>
<p>If you consider a ball of radius \(r\) then its surface area is \(A = 4\pi r^2\), so a perfectly spherical black body of radius \(r\) at a uniform temperature \(T\) radiates a total output power \(P_O\)</p>
<p>\[ P_O = 4\pi r^2 \sigma T^4 \]</p>
<p>or equivalently</p>
<p>\[ T = \left(\frac{P_O}{4\pi r^2 \sigma}\right)^{1/4} \]</p>
<p>OK, so consider a planet, which is a perfect black-body, orbiting at a radius \(R\) from the Sun<sup><a href="#2015-09-30-black-body-planet-footnote-2-definition" name="2015-09-30-black-body-planet-footnote-2-return">2</a></sup>. Let the input power flux from the Sun, at the point directly facing the Sun, be \(S\). For Earth, \(S \approx 1360\,\mathrm{Wm^{-2}}\). We can calculate two things from this.</p>
<p>The total output power of the Sun, \(P_S\) is given by the integral of \(S\) over a sphere of radius \(R\):</p>
<p>\[ P_S = 4\pi R^2 S \]</p>
<p>The total power falling on the planet, \(P_I\), is given by the integral of \(S\) over the surface of the disk of the planet which faces the Sun, and this is</p>
<p>\[ P_I = \pi r^2 S \]</p>
<p>So the first of these equations can be used to work out \(S\) in terms of \(P_S\) and then substituted into the second one:</p>
<p>\[ P_I = P_S\frac{r^2}{4 R^2} \]</p>
<p>If the planet is at equilibrium, then \(P_O = P_I\) or, in other words, output and input power is the same<sup><a href="#2015-09-30-black-body-planet-footnote-3-definition" name="2015-09-30-black-body-planet-footnote-3-return">3</a></sup>. So</p>
<p>\[ T = \left(\frac{P_S}{16\pi \sigma R^2}\right)^{1/4} \]</p>
<p>Finally, given \(S\) and \(R\) we can work out \(P_S\), and for our Sun, based on \(R \approx 1.50\times 10^{11}\,\mathrm{m}\) for Earth we get \(P_S \approx 3.85\times 10^{26}\,\mathrm{W}\).</p>
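<p>Since the same formula is about to be used several times, here it is as a function (a sketch of mine using the figures above; <code>pi</code> comes from <code>racket/math</code>):</p>
<pre><code>(require racket/math)
(define sigma 5.67e-8)                       ; Stefan-Boltzmann constant
(define P-sun (* 4 pi (expt 1.5e11 2) 1360)) ; ~3.85e26 W, from Earth's S and R
(define (black-body-temperature R)           ; R in metres, result in kelvin
  (expt (/ P-sun (* 16 pi sigma (expt R 2))) 1/4))</code></pre>
<p>Then <code>(black-body-temperature 1.5e11)</code> is about \(278\): the figure for Earth below.</p>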
<h2 id="in-practice">In practice</h2>
<p>So, we can compute the surface temperatures for the rocky planets, assuming they were black bodies.</p>
<p><strong>Earth.</strong> \(R \approx 1.50\times 10^{11}\mathrm{m}\), giving \(T \approx 278\,\mathrm{K}\) or \(5^\circ\mathrm{C}\). Actual mean surface temperature is \(287\,\mathrm{K}\) or \(14^\circ\mathrm{C}\): this is reasonably accurate.</p>
<p><strong>Mercury.</strong> \(R \approx 5.79\times 10^{10}\,\mathrm{m}\) giving \(T \approx 448\,\mathrm{K}\) or \(175^\circ\mathrm{C}\). Actual mean surface temperature is \(452\,\mathrm{K}\) or \(179^\circ\mathrm{C}\). This is also OK.</p>
<p><strong>Mars.</strong> \(R \approx 2.28\times 10^{11}\,\mathrm{m}\), giving \(T \approx 226\,\mathrm{K}\) or \(-47^\circ\mathrm{C}\). Actual mean surface temperature is \(226\,\mathrm{K}\) or \(-47^\circ\mathrm{C}\). This is pretty spooky: it has no right to be this good, and is probably only this good by chance.</p>
<p><strong>Pluto.</strong> \(R \approx 5.91\times 10^{12}\,\mathrm{m}\), giving \(T \approx 44\,\mathrm{K}\) or \(-229^\circ\mathrm{C}\). Actual mean surface temperature is \(44\,\mathrm{K}\) or \(-229^\circ\mathrm{C}\). The same is true for this: it is probably only this good by chance.</p>
<p><strong>Venus.</strong> \(R \approx 1.08\times 10^{11}\,\mathrm{m}\), giving \(T \approx 328\,\mathrm{K}\) or \(55^\circ\mathrm{C}\). Actual mean surface temperature is \(730\,\mathrm{K}\) or \(460^\circ\mathrm{C}\). This is hopeless, as you would expect it to be.</p>
<p>These are astonishingly accurate, except for Venus, which is out by a factor of more than two: Venus is the hottest planet in the Solar system because of a runaway greenhouse effect.</p>
<h2 id="notes">Notes</h2>
<p>Data came from <a href="http://www.wolframalpha.com/">Alpha</a> which, while I have considerable qualms about Wolfram products, is a reasonably good source of this sort of thing. I have been very casual about rounding, generally rounding to integers for both \(\mathrm{K}\) and \({}^\circ\mathrm{C}\). The temperatures don’t come with an uncertainty but I suspect the figure for Venus is less accurate than the others. I didn’t deal with the gas giants because they don’t have well-defined surfaces and I was just too lazy.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2015-09-30-black-body-planet-footnote-1-definition" class="footnote-definition">
<p>This is mostly an experiment with maths in frog. The conclusion is that it’s possible, but does not look great: I will stick to LaTeX. <a href="#2015-09-30-black-body-planet-footnote-1-return">↩</a></p></li>
<li id="2015-09-30-black-body-planet-footnote-2-definition" class="footnote-definition">
<p>Assume also the planet is distant from the Sun so we don’t need to worry about whether light leaks around the side of the planet and so on. <a href="#2015-09-30-black-body-planet-footnote-2-return">↩</a></p></li>
<li id="2015-09-30-black-body-planet-footnote-3-definition" class="footnote-definition">
<p>This assumes that the planet has a constant surface temperature, which will be true only if it conducts heat perfectly, but turns out to be a good enough approximation. <a href="#2015-09-30-black-body-planet-footnote-3-return">↩</a></p></li></ol></div>Fog computingurn:https-www-tfeb-org:-fragments-2015-07-23-fog-computing2015-07-23T09:57:01Z2015-07-23T09:57:01ZTim Bradshaw
<p>Fog computing is like cloud computing except that no-one can see what you are doing.</p>
<!-- more-->
<h2 id="a-basket-of-eggs">A basket of eggs</h2>
<p>Here is an interesting quote from the website of a company which provides an ‘enterprise content collaboration platform’:</p>
<blockquote>
<p>80% of central government departments use [our system], making it the most trusted cloud-collaboration solution for UK government and public sector organisations.<sup><a href="#2015-07-23-fog-computing-footnote-1-definition" name="2015-07-23-fog-computing-footnote-1-return">1</a></sup></p></blockquote>
<p>There are several ways of understanding this.</p>
<p><strong>What they want you to think.</strong> ‘Gosh, all these government people will be very fussy about security and extremely competent, and we’re a big corporate/government type place too: we should be using this product ourselves.’</p>
<p><strong>What Dr. Evil is thinking.</strong> ‘80% of UK central government departments are using these people? That’s a lot of data that I am sure my customers would be willing to pay a great deal for, all in one place. Minions: to your keyboards!’</p>
<p><strong>What President Evil is thinking.</strong> ‘80% of UK central government departments are using these people? That fool Dr. Evil is probably wasting a lot of effort trying to break in to sell me the data. Minions: buy that company for me!’</p>
<p><strong>What the government is thinking.</strong> ‘Minions: another bottle! And send up another boy: I seem to have broken this one.’</p>
<h2 id="the-desert-of-the-real">The desert of the real</h2>
<p>We all like to talk about ‘the cloud’ as if it is something new, but it isn’t: all it is is centrally-managed and outsourced storage and processing of our data. The only new thing about this is the outsourcing, and that’s not very new.</p>
<p><strong>Central management</strong> holds out the hope of saving money and improving security, but means that there is a single point of failure: if the system fails then it fails for everyone, and if it is compromised then it is compromised for everyone. Information can also leak between regions which should be isolated from each other: in particular a hostile user who succeeds in compromising the system can obtain other users’ information.</p>
<p><strong>Outsourcing</strong> means that small organisations or individuals don’t have to have expertise in data management but can rely on an external provider to do it for them. Large organisations may think they can save money by outsourcing and occasionally they can. Outsourcing means you are protected only by <a href="../../../../2015/03/14/contracts/">a contract</a> and lose direct control over the system: this is fine so long as you are sure that the provider is honest, competent, and not subject to a malevolent legislative framework. Well, they may at least be honest.</p>
<p>The thing that makes the economics of cloud computing work is that there will be a relatively small number of relatively large specialist providers who can become really expert at providing these services and exploit economies of scale to make doing so cheap<sup><a href="#2015-07-23-fog-computing-footnote-2-definition" name="2015-07-23-fog-computing-footnote-2-return">2</a></sup>. Unfortunately this is also what makes cloud computing dangerous: if a lot of sensitive data is centralised in a small number of organisations this is like painting targets on the backs of those organisations. Anyone who is interested in that data — bad people, governments (are they different than bad people?) and competitors — will stand to gain enormously by compromising cloud providers.</p>
<p>Of course, they will tell you how secure they are, and imply that they can never be compromised like this. If you believe that you can stop reading now.</p>
<h2 id="obscured-by-clouds">Obscured by clouds</h2>
<p>So let’s assume that you don’t trust your cloud service providers and you care about your data: Can you still make use of them? The answer is that you can in limited but, I think, still useful ways.</p>
<p>There are two assumptions that you must not make:</p>
<ul>
<li>don’t assume the cloud provider is reliable — your data and any associated services can vanish at any time and that must not be catastrophic;</li>
<li>don’t assume the cloud provider can be trusted — assume that either they are themselves not trustworthy, or that they have been compromised, legally or illegally, and that anything you store or process there is visible to bad people as a result.</li></ul>
<p>It’s fairly easy to deal with the first point: if the data might go away you need to make sure that you have other copies of it, and ideally copies that you have full control over. Similarly with services: make sure you can survive if things go away.</p>
<p>The second case is harder. If you can’t trust your provider what use are they? Well, still some use. In particular, if all the data that you store on the cloud is encrypted <em>and the encryption keys are not available to the provider</em> then, even if bad people get access to this data there is rather little that they can do with it: it’s just a huge blob of meaningless bits to them. To decrypt the data they must attack your systems, where the encryption keys are held.</p>
<p>Encrypting data like this fairly seriously limits what can be done with the data in the cloud: in fact all that can be done with it is to ship it to and from clients and store it in the meantime. No kind of processing which depends on the content of the data can be done at all on the provider’s systems. For many purposes this is a less crippling restriction than it seems: globally-available storage is quite a useful thing to have, in its own right.</p>
<p>For instance, a government agency might want to keep sensitive documents in the cloud: it can do this quite happily so long as the documents are always encrypted before they leave the client with keys which <em>also</em> never leave the client. To edit a document it is fetched, decrypted, edited and encrypted again on the client, and then sent back to the cloud<sup><a href="#2015-07-23-fog-computing-footnote-3-definition" name="2015-07-23-fog-computing-footnote-3-return">3</a></sup>.</p>
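<p>A toy version of the round trip described above, to make its shape concrete (every name here is my invention, and everything is a stand-in: a hash table for the provider, and XOR for the cipher, which is <em>not</em> real encryption):</p>
<pre><code>(define cloud (make-hash))                 ; stand-in for the provider
(define (store id blob) (hash-set! cloud id blob))
(define (fetch id) (hash-ref cloud id))
(define (crypt key bs)                     ; toy 'cipher': XOR with the key
  (list->bytes (for/list ([b (in-bytes bs)]
                          [k (in-cycle (in-bytes key))])
                 (bitwise-xor b k))))
(define (edit-document id key edit)        ; fetch, decrypt, edit, re-encrypt, store
  (store id (crypt key (edit (crypt key (fetch id))))))</code></pre>
<p>The provider, here <code>cloud</code>, only ever sees ciphertext: the key and the plain text exist only on the client, inside <code>edit-document</code>.</p>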
<p>What a system like this <em>can’t</em> do, by design, is process data in the cloud in any way which depends on its content: if you want, say, a shared calendar with server-side appointment management then you can’t have it, because it requires the server to be able to see the content of the data.</p>
<h2 id="the-illusion-of-security">The illusion of security</h2>
<p>Cloud service providers are very anxious to tell you how secure they are: they will use terms like ‘encrypted at rest’, ‘AES–256’, and ‘military-grade security’, all of which signify nothing. There are only two questions that matter:</p>
<ol>
<li>do they have the keys to whatever encryption system they are using?</li>
<li>if they do, are you and they the same person?</li></ol>
<p>If the answer to the first of these is true, then the answer to the second must also be true: if it’s not then you should not trust them. Yes, they might mean well, and they might even be competent, but even if they are they can be subject to attacks which they will not be able to withstand: when the people who won’t say who they work for come calling with their bit of paper then the keys <em>will</em> be handed over and they <em>won’t</em> tell you that this has happened.</p>
<p>The only way that your data is safe is if you put it in a box to which <em>only you</em> have the key<sup><a href="#2015-07-23-fog-computing-footnote-4-definition" name="2015-07-23-fog-computing-footnote-4-return">4</a></sup>, and that means that <em>you</em> must encrypt it with keys you control and live with the consequences of that.</p>
<h2 id="in-the-fog">In the fog</h2>
<p>Fog computing is more-or-less this: it is the use of cloud-based shared storage to share data which is encrypted and decrypted only on the client, providing the possibility of real security rather than the illusion of it that cloud providers currently offer.</p>
<p>One good thing about fog computing is that you can implement it yourself: you do not need to rely on a provider offering the service. A tool which encrypts data on the client can sit on top of any kind of cloud storage provider. This is, indeed, beginning to happen: there are backup tools (notably <a href="https://www.arqbackup.com/">Arq</a>) which do this client-side encryption and can indeed be configured to sit on top of many different cloud storage providers.</p>
<p>However even encrypting the data like this is not really enough. The bad people can still look at your patterns of access and (if you are not careful to obscure it) metadata such as file names and deduce more than you would like: for instance they can work out who you talk to by noticing who else accesses your data, and so on. This can be avoided by obfuscating these access patterns but it is much harder to do. But just encrypting the data with keys you control is a big step in the right direction.</p>
<p>Fog computing is inherently limited: since the data in the cloud is entirely opaque, no useful computation can be done with it there. You can not have shared calendars with conflict detection in the cloud, you can not edit documents which live entirely in the cloud, and so on. But it is, or can be, secure, and if you care about security this is what you should be doing.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2015-07-23-fog-computing-footnote-1-definition" class="footnote-definition">
<p>The quote is current at the time of writing, but edited to remove names. <a href="#2015-07-23-fog-computing-footnote-1-return">↩</a></p></li>
<li id="2015-07-23-fog-computing-footnote-2-definition" class="footnote-definition">
<p>If you are a large enough organisation to get computers custom-made to your designs then you can make them very cheap, and some cloud providers do just that. Almost all of them will be building custom datacentres. <a href="#2015-07-23-fog-computing-footnote-2-return">↩</a></p></li>
<li id="2015-07-23-fog-computing-footnote-3-definition" class="footnote-definition">
<p>Documents which are not sensitive or which should be public can of course be left in plain text in the cloud. <a href="#2015-07-23-fog-computing-footnote-3-return">↩</a></p></li>
<li id="2015-07-23-fog-computing-footnote-4-definition" class="footnote-definition">
<p>And even then the shabby people with their bits of paper and police escort can come calling, but at least you will know they have called, which is the best you can hope for. <a href="#2015-07-23-fog-computing-footnote-4-return">↩</a></p></li></ol></div>Covariance and contravarianceurn:https-www-tfeb-org:-fragments-2015-07-21-coco2015-07-21T16:22:16Z2015-07-21T16:22:16ZTim Bradshaw
<p>Physicists seem still to be taught about tensors as being, essentially, multidimensional arrays with special transformation rules which must be learned by rote. So I wrote a document which tries to present a more useful approach.</p>
<!-- more-->
<p>The aim of this document was to present a more modern, ‘geometrical’ approach, while not requiring too much background in differential geometry. I don’t suppose anyone read it when I posted pointers to it on reddit, and certainly no-one will read it now, but <a href="/texts/2015/coco.pdf">here it is</a><sup><a href="#2015-07-21-coco-footnote-1-definition" name="2015-07-21-coco-footnote-1-return">1</a></sup>.</p>
<hr />
<div class="footnotes">
<ol>
<li id="2015-07-21-coco-footnote-1-definition" class="footnote-definition">
<p>This URL might change (and has already changed: it used to be on Dropbox). <a href="http://www.tfeb.org/fragments/2015/07/21/coco/">This post itself</a> is a better link to remember as I will update the pointer if I move the document. <a href="#2015-07-21-coco-footnote-1-return">↩</a></p></li></ol></div>Contractsurn:https-www-tfeb-org:-fragments-2015-03-14-contracts2015-03-14T15:52:45Z2015-03-14T15:52:45ZTim Bradshaw
<p>Do not eat the free lunch: it has probably been poisoned.</p>
<!-- more-->
<p>On 2015–03–12, <a href="http://google-opensource.blogspot.co.uk/2015/03/farewell-to-google-code.html">Google announced the closure of Google Code</a>, the latest in a succession of services they have switched off over the last few years. This is a perfectly reasonable thing for them to do: they are a commercial organisation and need to focus on the things that make them money — selling advertising and acquiring as much personal data as possible from users of their services to help them do that — and hosting source code repositories is probably not a very efficient way of scraping such data off people.</p>
<p>So there is no reason to complain about this, however annoying it is: it was a service that was being offered for free, after all. But of course, a number of people will be significantly inconvenienced when things like this go away because they have come to rely on them, either personally or as part of their business: this turns out not to have been the smartest idea. The interesting question is whether they will learn from the experience and what they’ll do to stop it happening again.</p>
<h2 id="too-cheap-to-meter">Too cheap to meter</h2>
<p>The cost of many things related to computers and networking has fallen dramatically over time, and continues to fall. We’ve also found out that more things are related to computers and networking than we realised: music, still and moving images, books and so on. In particular the <a href="https://en.wikipedia.org/wiki/Marginal_cost">marginal cost</a> — the cost of making an additional copy of something — has often become extremely low because the cost of storing and moving data around has become very low indeed.</p>
<p>It’s quite tempting to think that ‘very small’ is the same as ‘zero’<sup><a href="#2015-03-14-contracts-footnote-1-definition" name="2015-03-14-contracts-footnote-1-return">1</a></sup>, but this is a fatal mistake: if it costs <em>nothing</em> to do something then it costs nothing to do an arbitrary amount of it, while if it merely costs a very small amount then you can make the cost arbitrarily large by doing enough of it. If something with a non-zero cost, however small, is given away for <em>no</em> cost then the giver is in a dangerous situation: nothing is too cheap to meter unless it is free<sup><a href="#2015-03-14-contracts-footnote-2-definition" name="2015-03-14-contracts-footnote-2-return">2</a></sup>, and nothing is completely free. So if an organisation is giving away a service ‘for free’ there is reason to be suspicious: either things are what they seem, in which case they are going to run out of money at some point and disappear, or things are not what they seem.</p>
<p>If things are what they seem there’s a fairly obvious problem: you probably don’t want to build anything substantial around a service which is inevitably going to evaporate when the organisation providing it falls off a cliff.</p>
<p>Things are more interesting when they are not what they seem: how is the organisation making money if they’re providing something for free?</p>
<h2 id="the-first-hit-is-free">The first hit is free</h2>
<p>One approach is the one traditionally used by people who sell recreational drugs: you get a free taste of the service, but the taste will be limited in ways which make it annoying to use and probably will prevent you from doing some things altogether. Eventually, all being well, you become both dependent on whatever it is they are pushing and frustrated with the limitations of the free version and decide to pay for the unrestricted version.</p>
<p>There is nothing very wrong with this approach: you’re getting something for free, after all: just not what you really wanted. And you have the option of paying for that if you choose to: that’s what the supplier wants you to do, after all. This is not, however, a very good long-term solution: the supplier could always simply stop offering the limited version or, worse, stop offering any version at all.</p>
<h2 id="the-place-where-there-is-no-darkness">The place where there is no darkness</h2>
<p>Another approach is one you might associate with a person wearing suspiciously well-cut clothes that you once met late at night at a crossroads somewhere in the deep south. Now you can play the guitar pretty well, but can you remember just what it was that you bargained away for your new talent, and when the debt will become due?</p>
<p>This is not the sort of bargain you want to make<sup><a href="#2015-03-14-contracts-footnote-3-definition" name="2015-03-14-contracts-footnote-3-return">3</a></sup>. But it is exactly this sort of bargain on which a lot of large companies have built their businesses: they provide you with some service, and in return you provide them with your soul, which they then package with a lot of other souls and sell on to you know not whom. They’re not, in fact, interested in providing the service: they’re in the soul collection and resale business.</p>
<p>A lot of people quite clearly think this is all just fine. They’re quite happy to trade their souls for an endless set of distractions: perhaps the point of the distractions is so they don’t realise just what it is they’ve lost and what exactly it was they gained in return if anything; or perhaps they have souls which are not very valuable and the bargain is a perfectly reasonable one, after all.</p>
<p>There is worse. When you met someone late at night to make this sort of bargain, you made very sure that you got a bit of paper with signatures on it detailing just exactly what the deal was<sup><a href="#2015-03-14-contracts-footnote-4-definition" name="2015-03-14-contracts-footnote-4-return">4</a></sup>. That’s not how the deals that are made so willingly now work: you get something momentarily useful or amusing, and in return you irrevocably give away something of yourself, and that’s as far as it goes. If, later, it becomes convenient for the entity you did the deal with to stop providing whatever entertainment it was, then one day it simply goes away and you have bargained your soul for air and darkness, and precious little of that.</p>
<h2 id="better-living-through-chemistry">Better living through chemistry</h2>
<p>The answer is quite conventional. If there is something you want and on which you might come to rely, then you <em>sign a contract for it</em>: a document which obliges you to pay for it, and in return obliges the provider to actually provide the service.</p>
<p>Contracts really do three things.</p>
<ul>
<li>They make it clear what exactly is being bought and sold, and avoid the ‘too cheap to meter’ fallacy I talked about above: the contract should detail what you get and what the limits on it are — how much bandwidth or storage you can use for instance — and what you are paying for it, which should generally not be ‘your immortal soul’.</li>
<li>They ensure that the interests of the consumer and the provider are the same, or at least similar: the consumer wants a service or a product that works well, and the provider gets paid if they provide that.</li>
<li>They specify what happens if the contract is terminated: what the responsibilities of each party are and what they are not. For instance the organisation providing your cloud storage might be obliged to give you a way to recover your data.</li></ul>
<p>The second point is particularly important: for a contract to be of any use at all <em>both parties have to get something out of it</em>: you can sign a contract with someone to provide you some service for free, but if they decide to stop doing that what are you going to do — perhaps you could ask them for your money back?</p>
<p>But, well, this <em>is</em> a very conventional and rather boring answer: surely we all live in a future where all this awful tedium is no longer needed. Wasn’t the internet meant to do away with all that? What happened to the gift economy? Are there no flying cars, after all? Sadly, no, the internet didn’t change all that: it simply enabled a collection of large corporations with toxic business models to fool a really large number of people. There are no flying cars.</p>
<p>On 2015–07–16 <a href="http://www.theregister.co.uk/2015/07/17/souceforge_titsup/">SourceForge fell over</a>: perhaps it will recover, this time. Once upon a time it was the bright future of source code hosting: who knows what will be lost when it finally goes away?</p>
<hr />
<div class="footnotes">
<ol>
<li id="2015-03-14-contracts-footnote-1-definition" class="footnote-definition">
<p>It is particularly tempting to people who want to make the argument that ‘no harm is done to the artists if I just download this song, because it costs nothing for them to deliver an extra copy: they have already been paid’. I am not sure if this argument is ever made in good faith, but it’s very easy to see that it doesn’t work by <em>reductio ad absurdum</em>: what would happen if <em>everyone</em> made it? However I don’t want to get sidetracked by that here. <a href="#2015-03-14-contracts-footnote-1-return">↩</a></p></li>
<li id="2015-03-14-contracts-footnote-2-definition" class="footnote-definition">
<p>‘Metering’ may be simply restriction of supply — for instance a limit to the amount of data you can transfer, which may not seem like metering although it is. In the limiting case the limit may be the physical capacity of the system: you can only transfer so much data per month over a link with a given bandwidth. I suspect that the original ‘too cheap to meter’ claim was made based on this assumption for domestic electricity usage (if it was ever really made at all). <a href="#2015-03-14-contracts-footnote-2-return">↩</a></p></li>
<li id="2015-03-14-contracts-footnote-3-definition" class="footnote-definition">
<p>Well, perhaps it is a bargain worth making, but probably not in exchange for anything related very closely to computers. <a href="#2015-03-14-contracts-footnote-3-return">↩</a></p></li>
<li id="2015-03-14-contracts-footnote-4-definition" class="footnote-definition">
<p>Perhaps in the hope of later renegotiation, although it generally seemed to turn out that the counterparty had rather better negotiation skills than you and, obviously, expensive lawyers with dead eyes. <a href="#2015-03-14-contracts-footnote-4-return">↩</a></p></li></ol></div>Road wornurn:https-www-tfeb-org:-fragments-2015-02-24-road-worn2015-02-24T23:17:38Z2015-02-24T23:17:38ZTim Bradshaw
<p>I play the guitar. Something that has been fashionable for some time is what are often called ‘road worn’ guitars. In other words new (but vintage-spec) guitars which have been aggressed in various ways to make them look old.</p>
<!-- more-->
<p>This is, of course, because everyone wants to have put in the hours in sweaty clubs to have Rory Gallagher’s Strat, or Pearly Gates or Old Black, but not all of us actually have done that. So instead we buy beautifully-made simulacra which we casually place, next to our reproduction hand-wired Plexi stack with its oh-so-carefully torn speaker cloth, in our Manhattan loft or London flat. And anyone who doesn’t look too closely might perhaps believe that, before our second career in finance, we did indeed put in the hours in the sweaty clubs. Perhaps, in fact, we are Jimmy Page? Perhaps, after a few drinks and lines, we might even believe it ourselves? Certainly we would not want to be seen as the sort of person who owns a new Les Paul, still less a new Leica, because what sort of people buy those? Rich men (yes, men) who work in finance and who when the revolution comes will, if we are lucky, be the first up against the wall but will more probably be impaled on spikes to await being eaten by nameless tentacled horrors (if you still believe the revolution will involve people rushing around waving flags and building barricades rather than ancient horrors leaking in from other dimensions I have news for you: you’re in the wrong universe). People, in other words, like us.</p>
<p>The people who buy these things are indeed richly deserving of their inevitable horrible fate. Malcolm Gladwell may be wrong about many things, but he’s right about the need to put in the hours: your guitar needs to be worn because you have worn it. But there is slightly more to this than there might first appear to be.</p>
<p>Something that musicians have understood for a long time is that certain old instruments and equipment really were pretty special. The Les Pauls that were made in the late 50s were pretty astonishing instruments, as were some of the amplifiers made in the following two decades. It’s not quite so acceptable to say that the loving reproductions (not the investment banker’s road-worn ones) that have been made since are, in many cases, as good or better than the originals.</p>
<p>Photographers have not really understood this yet, I think. We still believe that a sharper lens and more pixels are somehow going to result in a better photograph. Even those of us who prefer vintage equipment (whether it is in fact vintage or simply unchanged) have to argue that film has ‘more dynamic range’ or ‘more resolution’: perhaps, once, this was true. We need to grow up: would HCB’s pictures be better if he had had more pixels and a sharper lens? If you have <a href="http://www.peterturnley.com/french-kiss">Peter Turnley’s excellent book of Paris photographs</a> do you really think the digital pictures — which unquestionably are sharper and higher resolution — are better than the film pictures in any way at all?</p>
<p>My old Hammond has noisy keyswitches in the same way that Tri-X has grain and old lenses have aberrations, and <em>that’s what makes them great</em>.</p>
<p>How easy are physical systems to predict?</p>
<!-- more-->
<p>Well, here are a couple of rather lovely examples. Both of these are due to Michael Berry and are mentioned in a book called ‘A Passion for Science’ which is in fact a set of collected transcripts of BBC radio programmes from sometime in the mid 1980s: I heard them on the radio originally, and they have stayed with me — I didn’t find the paper versions until quite recently. I’m giving these from memory: they might differ slightly from the versions described in the book.</p>
<p>For both of them imagine a universe where everything is completely Newtonian, so no quantum mechanics in particular. There is Newtonian gravity.</p>
<h2 id="billiards">Billiards</h2>
<p>The first case is billiards, and we’ll consider a completely idealised billiard table: completely smooth, flat and rigid, completely round balls with completely known properties (so how elastic they are etc), and the same for the cushions. Now someone makes a shot, and we either know the direction and force exactly or are allowed to measure the cue ball’s position and velocity exactly shortly after the shot. We don’t know one thing: there are some people standing around the table, and we don’t know where they are, so we don’t know what their gravitational fields look like. Now we want to predict where the balls go, and we’ll say that the prediction fails when a ball leaves a collision 90 degrees from where we predicted — it’s obvious that after that point we can’t usefully predict anything. How many collisions can we predict ahead?</p>
<h2 id="the-electron-at-the-edge-of-the-universe">The electron at the edge of the universe</h2>
<p>The second case is an ideal gas: a lot of little ideal particles in an ideal box. Again we know everything: the starting conditions are known completely, the box is completely understood &c &c. The box insulates everything but (Newtonian) gravity to make things simpler. And this time we also know everything about the rest of the universe as well: we don’t need to predict it forward, we’re just given all the data about how it evolves (in fact I think that without loss of generality we can assume an empty universe outside the box, which reduces the data volume considerably). Except that there’s an electron at the edge of the universe and we don’t know where it is (apart from how far away it is), and so again we don’t know its gravitational field. Now we want to predict this system forwards and we’ll use the same criterion for failure: when some particle leaves a collision 90 degrees out from where we predict. How many collisions before that happens?</p>
<hr />
<p>The answers are seven or eight for the first case, and about fifty for the second.</p>Rumours of my deathurn:https-www-tfeb-org:-fragments-2015-02-01-rumours-of-my-death2015-02-01T20:54:34Z2015-02-01T20:54:34ZTim Bradshaw
<p>When I first used Lisp, the common refrain was that Lisp was dead.</p>
<!-- more-->
<p>There was a single free implementation of CL (which required you to physically sign a license of some kind and return it, in exchange for a tape) which was deficient in many respects. The two or three commercial implementations cost about a year’s salary each. Enormous effort had been spent on implementations which ran on special hardware. One variant of these cost more than your house: the other rather less, but turned out to have been implemented by the fey — you seriously did not want to spend too much time with it if you did not want problems involving having your firstborn somehow changed into a strange and somehow <em>absent</em> creature.</p>
<p>(And there was a terrible, unspeakable truth about even the expensive hardware: the people who implemented it didn’t understand computer performance very well with the result you would expect. The systems were faster than a VAX, but <em>everything</em> was faster than a VAX, including some PDP–11s. A Sun 3/260 ate them alive, and you could buy several of those for the cost of a house, with bundled licenses.)</p>
<p>Performance was pretty grim: of course nothing was fast on machines that, on a good day, could execute a few million instructions a second, but Lisp implementations were problematic at best. You spent a lot of time turning recursive code into iterative code by hand and writing macros (no inlining) to get performance to be reasonable and worrying about the primitive garbage collectors.</p>
<p>There was no standard: existing implementations differed in basic details like error handling (not in the aluminium book) and a standard object system was a distant dream. The news from the standards committee was ominous: the special-hardware people were exerting pressure and there were serious worries that the object system would not be efficiently implementable on stock hardware. The language was going to be huge.</p>
<p>Standard or semi-standard libraries were not really thought of.</p>
<p>Everyone knew Lisp was dead: the coming thing was, perhaps, Scheme — tail-call elimination <em>in the language</em>, a small language (yet MIT Scheme somehow had a bigger footprint than the CLs we used) — or C++ or some functional language whose name no-one now remembers. But Lisp was dead: no question about it.</p>
<hr />
<p>Fast forward.</p>
<hr />
<p>I have two high-quality CL implementations on my machine and one Scheme-derived system, also of very high quality, which created this blog: I have long ago stopped counting the number of good-quality free implementations. One of the implementations I use is commercial: the annual support is about 10% of my monthly rent. I can run dozens of instances of each without the machine noticing, and I could happily run a full CL development system on a system less powerful and smaller than my phone. Performance is a solved problem: yes, highly-optimised code is, perhaps, slower than optimised C or Fortran but since almost all performance problems are design problems no-one older than about 19 cares any more. CL has an advanced, performant and standard object system and, in effect, a standard metaobject system as well. The library problem has been solved by Quicklisp and a large number of good-quality standard libraries. I am still using code I wrote over twenty-five years ago with essentially no modification: meanwhile the Python code I wrote ten years ago is long rendered obsolete by gratuitous changes in the language (the Perl code I wrote at the same time is doing fine, however).</p>
<p>And yet still the cry goes up: Lisp is dead; Lisp is dead.</p>Macros in Racket, part twourn:https-www-tfeb-org:-fragments-2015-01-28-macros-in-racket-part-two2015-01-28T19:31:18Z2015-01-28T19:31:18ZTim Bradshaw
<p>The second part of my notes on writing macros in Racket.</p>
<!-- more-->
<p>This is the second part of at least three: the first part is <a href="../../../../2015/01/13/macros-in-racket-part-one/">here</a>, and the third part is <a href="../../../../2015/12/12/macros-in-racket-part-three/">here</a>. This won’t make much sense unless you’ve read that. As before I make no claims to be an expert in Racket’s macro system although I am familiar with Lisp macros in general: this is just some more notes I wrote while learning it.</p>
<h2 id="the-unwashed-lisp-hackers-version-of-collecting">The unwashed Lisp hacker’s version of <code>collecting</code></h2>
<p>So, we can write <code>clet</code>: can we write <code>collecting</code>? Yes, we can:</p>
<pre><code>(require (for-syntax racket/list))
(define-syntax (collecting stx)
  (datum->syntax
   (quote-syntax collecting)
   `(let ([r '()])
      (define (,(datum->syntax stx 'collect) it)
        (set! r (cons it r)) it)
      ,@(rest (syntax->list stx))
      (reverse r))))</code></pre>
<p>This works because, in the internal definition of <code>collect</code>, we’ve intentionally given it a name which uses the context of the syntax object we’re transforming, not the context of the macro. It’s easy to confirm that this works the way you would expect, and in particular that it’s safe in both directions: for instance</p>
<pre><code>> (let ((reverse (λ (x) x)))
    (collecting (collect 1) (collect 2)))
'(1 2)</code></pre>
<p>shows that the binding of <code>reverse</code> when the macro is called has not ‘infected’ the macro definition.</p>
<p>It seems as if that should be all you need: just be careful about which context you choose, and make sure that the ‘default’ context is the one from the macro, not the one from where it is used. In fact it isn’t, quite: see <a href="#macro-composition">below</a>. However even if it were, it’s clearly a pain to write macros this way.</p>
<h2 id="pattern-matching">Pattern matching</h2>
<p>Pretty much all macros do two things:</p>
<ol>
<li>deconstruct their arguments in some more-or-less complicated way, but almost always in a way which is significantly more complicated than anything that needs to be done for the arguments of a function;</li>
<li>construct a form which is the result of the macro and which, again, may be complicated.</li></ol>
<p>The beauty of traditional Lisp macros is that since the arguments and results of the macro were just what the reader spat out — lists and symbols and so on — and since Lisp was kind of good at doing things to these structures as it was designed for that, and finally since the whole power of the language was available in the macro, this was not horrible even without special tools, although it was not particularly pleasant for complicated macros.</p>
<p>Hygienic macros make this much less pleasant because the objects that need to be deconstructed and constructed are now opaque syntax objects, and there is additional worrying about context to do. The answer to this is to provide special tools which do the boring bits for you: this makes everything simpler, at the cost of making it still more opaque what is actually happening. In almost all cases that’s a tradeoff worth making. Pattern matching is also a fashionable thing amongst the young and hip, of course.</p>
<p>The way this is done in Racket is via <code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/stxcase-scheme..rkt)._syntax-case))" style="color: inherit">syntax-case</a></code>, its slightly simpler friend <code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/stxcase-scheme..rkt)._syntax-rules))" style="color: inherit">syntax-rules</a></code>, and by <code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/stxcase-scheme..rkt)._syntax))" style="color: inherit">syntax</a></code> and variants on it.</p>
<p><code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/stxcase-scheme..rkt)._syntax-case))" style="color: inherit">syntax-case</a></code> takes a bit of syntax and matches it against patterns, binding matches, which can then be used in <code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/stxcase-scheme..rkt)._syntax))" style="color: inherit">syntax</a></code> forms lexically within it to return syntax objects, whose context is that of the <code>syntax-case</code> form (so hygienic). There is syntactic sugar for <code>syntax</code>: <code>(syntax ...)</code> can be written <code>#'...</code> in the same way that <code>(quote ...)</code> can be written <code>'...</code>. There is also <code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/qqstx..rkt)._quasisyntax))" style="color: inherit">quasisyntax</a></code> which works the same way as <code><a href="http://docs.racket-lang.org/reference/quasiquote.html#(form._((lib._racket/private/letstx-scheme..rkt)._quasiquote))" style="color: inherit">quasiquote</a></code>, except that the various unquoting things are preceded with <code>#</code>. <code>quasisyntax</code>, unsurprisingly, also has syntactic sugar: <code>(quasisyntax ...)</code> can be written <code>#`...</code>.</p>
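<p>A toy example of the sugar (my own, not from the docs):</p>
<pre><code>> (syntax->datum #'(+ 1 2))
'(+ 1 2)
> (syntax->datum #`(+ 1 #,(+ 2 3)))
'(+ 1 5)</code></pre>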
<p>I’m not going to describe the patterns in any detail, largely because I only understand the simple cases. However the simple cases are relatively easy to understand and pleasant to use.</p>
<p>Once a case has matched in <code>syntax-case</code> the corresponding expression is evaluated, and its value is the value of the form. Generally that wants to be a bit of syntax.</p>
<p>The first important thing to understand is that <code>syntax</code> is not <code>quote</code>-for-syntax: it interpolates things which matched in a lexically surrounding <code>syntax-case</code>, if there is one (if there isn’t, then I think it <em>is</em> <code>quote</code>-for-syntax).</p>
<p>The second important thing to understand is that <code>syntax-case</code> and <code>syntax</code> turn Racket into a sort of bodged Lisp–2: the things matched by <code>syntax-case</code> can be used <em>only</em> in <code>syntax</code> forms. But it’s not actually a separate namespace, because if you refer to them outwith such a form you get a compile-time error. I don’t know why this is — perhaps to avoid accidentally naming matches outside a <code>syntax</code> form — but it is certainly annoying.</p>
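<p>For instance, something like this fails at compile time (I may not have the exact wording of the error right):</p>
<pre><code>> (syntax-case #'(a b) ()
    [(x y) x])
x: pattern variable cannot be used outside of a template</code></pre>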
<p>So, here are some examples.</p>
<p>A simple <code>while</code> form:</p>
<pre><code>(define-syntax (while stx)
(syntax-case stx ()
[(_ test body ...)
#'(let loop ()
(when test
body ...
(loop)))]))</code></pre>
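<p>And it works:</p>
<pre><code>> (let ([i 0])
    (while (< i 3)
      (display i)
      (set! i (+ i 1))))
012</code></pre>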
<p>A simple implementation of <code>let</code>, leaving out the named-<code>let</code> case, which shows how good the pattern matching is:</p>
<pre><code>(define-syntax (with stx)
(syntax-case stx ()
[(_ ([var val] ...) body ...)
#'((λ (var ...) body ...) val ...)]))</code></pre>
<p>A better implementation which deals with the empty body case (<code>(λ (...))</code> is illegal in Racket) and also optimises a simple case:</p>
<pre><code>(define-syntax (with stx)
(syntax-case stx ()
[(_ () body ...)
;; no vars: trivial case
#'(begin body ...)]
[(_ ([var val] ...))
;; null body: make sure vars are evaluated
#'(begin val ... (void))]
[(_ ([var val] ...) body ...)
#'((λ (var ...) body ...) val ...)]))</code></pre>
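<p>Checking the three cases:</p>
<pre><code>> (with () 'empty)
'empty
> (with ([x 1] [y 2]) (+ x y))
3
> (with ([x (display "hi")]))
hi</code></pre>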
<p>One thing which <code>syntax-case</code> allows is the notion of literal names which must occur in the source. So for instance let’s say I wanted to write some mutant <code>loop</code> macro whose syntax was <code>(loop for x in y do ...)</code>: where <code>for</code>, <code>in</code>, <code>do</code> are literals. Well, I can write something to match this:</p>
<pre><code>> (define-syntax (loop stx)
(syntax-case stx (for in do)
[(_ for v in l do body ...)
#'(for ([v (in-list l)]) body ...)]))
> (loop for x in '(1 2 3) do (print x))
123
> (loop with x in '(1 2 3) do (print x))
loop: bad syntax in: (loop with x in (quote (1 2 3)) do (print x))</code></pre>
<p>The syntax object that corresponds to <code>stx</code> here is the whole form: the equivalent to CL’s <code>&WHOLE</code>. It’s almost never necessary to worry about the <code>car</code> of this since it will obviously be <code>loop</code>. However I’m always tempted to provide it as a literal.</p>
<p><code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/stxcase-scheme..rkt)._syntax-rules))" style="color: inherit">syntax-rules</a></code> is (almost: there is some complexity, I think) a wrapper around <code>syntax-case</code>: it provides the enclosing function for you, and it implicitly wraps the right-hand side of each case, which must be just one form, in a <code>syntax</code> form. So the above definition of <code>with</code> could be written:</p>
<pre><code>(define-syntax with
(syntax-rules ()
[(_ () body ...)
;; no vars: trivial case
(begin body ...)]
[(_ ([var val] ...))
;; null body: make sure vars are evaluated
(begin val ... (void))]
[(_ ([var val] ...) body ...)
((λ (var ...) body ...) val ...)]))</code></pre>
<p><code>syntax-rules</code> can be defined something like this (this is due to <a href="https://gist.github.com/tfeb/0b8531c94cf685824626">bmastenbrook</a>):</p>
<pre><code>(require (for-syntax
(rename-in racket
[syntax-rules racket:syntax-rules])))
(begin-for-syntax
(define-syntax syntax-rules
(racket:syntax-rules ()
[(_ literals (pattern expansion) ...)
(lambda (s)
(syntax-case s literals
(pattern #'expansion) ...))])))</code></pre>
<p><code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/misc..rkt)._define-syntax-rule))" style="color: inherit">define-syntax-rule</a></code> combines <code>define-syntax</code> and a single rule for <code>syntax-rules</code>. I <em>think</em> it might be equivalent to this:</p>
<pre><code>(define-syntax define-syntax-rule
(syntax-rules ()
[(_ (name pat ...) expansion)
(define-syntax name
(syntax-rules ()
[(name pat ...) expansion]))]))</code></pre>
<p>although I am probably missing some complexity here.</p>
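<p>However it is defined, it is the pleasant way to write single-pattern macros. The classic example:</p>
<pre><code>> (define-syntax-rule (swap! a b)
    (let ([tmp a])
      (set! a b)
      (set! b tmp)))
> (define x 1)
> (define y 2)
> (swap! x y)
> (list x y)
'(2 1)</code></pre>
<p>and, because it is hygienic, this works even if one of the variables is itself called <code>tmp</code>.</p>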
<p>There is a useful variant on <code>syntax-case</code> called <code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/stxcase-scheme..rkt)._with-syntax))" style="color: inherit">with-syntax</a></code>: it looks more like a <code>let</code>-style form, and <em>all</em> the patterns in the clauses must match, at which point all the pattern variables will be bound.</p>
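<p>It is an ordinary form, so you can play with it outside a macro:</p>
<pre><code>> (syntax->datum
   (with-syntax ([(a b) #'(1 2)]
                 [c #'3])
     #'(c b a)))
'(3 2 1)</code></pre>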
<p>So, what about our desirable macros?</p>
<p><code>collecting</code> is pretty easy. Here are two different versions. The first uses <code>quasisyntax</code>:</p>
<pre><code>(define-syntax (collecting stx)
(syntax-case stx ()
[(_) #'(void)]
[(_ body ...)
#`(let ([r '()])
(define (#,(datum->syntax stx 'collect) it)
(set! r (cons it r)) it)
body ...
(reverse r))]))</code></pre>
<p>The second uses <code>with-syntax</code>:</p>
<pre><code>(define-syntax (collecting stx)
(syntax-case stx ()
[(_) #'(void)]
[(_ body ...)
(with-syntax ([collect (datum->syntax stx 'collect)])
#'(let ([r '()])
(define (collect it)
(set! r (cons it r)) it)
body ...
(reverse r)))]))</code></pre>
<p>This is pretty nice, I think. Note that you could not do this with <code>syntax-rules</code>, or at least I can’t see how to do it: <code>syntax-rules</code> is quite a lot less general than <code>syntax-case</code>.</p>
<p><code>clet</code> is harder, because each element of the binding list may be either an identifier or a two-element list. If we insisted on a two-element list it would be easy (see above). Here is the best I can do:</p>
<pre><code>(require racket/undefined)
(define-syntax (clet stx)
(syntax-case stx ()
[(_ ()) #'(void)]
[(_ () body ...) #'(begin body ...)]
[(_ (b ...) body ...)
(let-values ([(vars vals)
(for/lists (as vs) ([binding (syntax->list #'(b ...))])
(syntax-case binding ()
[(var val)
(identifier? #'var)
(values #'var #'val)]
[var
(identifier? #'var)
(values #'var #'undefined)]
[_ (raise-syntax-error #f "bad binding" stx)]))])
#`((λ #,vars body ...) #,@vals))]))</code></pre>
<p>Well, this is still quite hairy, but almost all of the hair involves processing the binding list, which is done using <code>syntax-case</code> again, using an additional feature of it whereby it can use a ‘guard’ expression to decide whether a clause matches: <code>identifier?</code> returns true if a syntax object refers to an identifier. I think there must be a way of using <code>with-syntax</code> to avoid the <code>quasisyntax</code> form.</p>
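<p>Guards are easy to see in isolation with a little toy macro (<code>classify</code> is my own name, obviously):</p>
<pre><code>> (define-syntax (classify stx)
    (syntax-case stx ()
      [(_ x)
       (identifier? #'x)
       #''identifier]
      [(_ x) #''something-else]))
> (classify foo)
'identifier
> (classify (+ 1 2))
'something-else</code></pre>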
<p>Even with all this hair, this version of <code>clet</code> is far easier to read than the previous one, and not harder to read than the CL equivalent.</p>
<p>A better version of <code>clet</code> would, I think, need a proper parser for syntax. I think that is what <code><a href="http://docs.racket-lang.org/syntax/Parsing_Syntax.html#(form._((lib._syntax/parse..rkt)._syntax-parse))" style="color: inherit">syntax-parse</a></code> is, although I have not investigated that.</p>
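<p>Without having investigated it properly, something like the following sketch at least seems plausible: <code>syntax-parse</code> lets you annotate pattern variables with syntax classes such as <code>id</code> and <code>expr</code>, which means malformed bindings should get decent error messages for free. This is only a sketch of mine, and it handles only the non-empty-body case:</p>
<pre><code>(require (for-syntax racket/base syntax/parse)
         racket/undefined)
(define-syntax (clet stx)
  (syntax-parse stx
    [(_ (b ...) body:expr ...+)
     (with-syntax ([((var val) ...)
                    (for/list ([binding (in-list (syntax->list #'(b ...)))])
                      ;; each binding is either [var val] or a bare identifier
                      (syntax-parse binding
                        [(var:id val:expr) (list #'var #'val)]
                        [var:id (list #'var #'undefined)]))])
       #'((λ (var ...) body ...) val ...))]))</code></pre>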
<h2 id="macro-composition">Macro composition</h2>
<p>As mentioned above, we don’t yet have quite all the tools we need to write some kinds of macros: specifically macros which are intentionally slightly unhygienic, such as <code>collecting</code>. As an example, let’s suppose we wanted a general purpose, intentionally-unhygienic, <code>with-abort</code> macro which provided an <code>abort</code> function which would, well, abort. Without thinking too hard about the implications of <code><a href="http://docs.racket-lang.org/reference/cont.html#(def._((lib._racket/private/more-scheme..rkt)._call/cc))" style="color: inherit">call/cc</a></code> we could write this as:</p>
<pre><code>(define-syntax (with-abort stx)
(syntax-case stx ()
[(_ body ...)
#`(call/cc (λ (#,(datum->syntax stx 'abort))
body ...))]))</code></pre>
<p>So now <code>(with-abort (abort 2) (end-the-world))</code> returns <code>2</code> and does not end the world.</p>
<p>Well, we might want to use this macro in another macro:</p>
<pre><code>(define-syntax-rule (while/abort test body ...)
(with-abort
(let loop ([r test])
(when r
body ...
(loop test)))))</code></pre>
<p>Now something like the following will work:</p>
<pre><code>> (let ([x 0])
(while/abort (< x 10) (set! x (+ x 1)) (print x)))
12345678910</code></pre>
<p>But the whole point was to be able to use <code>abort</code> in the body, and that <em>doesn’t</em> work:</p>
<pre><code>> (let ([x 0])
(while/abort (< x 10) (set! x (+ x 1)) (when (> x 1) (abort 'done))))
abort: undefined;
cannot reference an identifier before its definition</code></pre>
<p>Oh, dear. The problem here is that <code>while/abort</code> is hygienic, so the <code>abort</code> binding that is introduced by <code>with-abort</code> is not visible in the body.</p>
<p>We could fix this by better design:</p>
<pre><code>(define-syntax-rule (with-named-abort (abort) body ...)
;; a better macro
(call/cc (λ (abort) body ...)))
(define-syntax (with-abort stx)
;; backwards compatible
(syntax-case stx ()
[(_ body ...)
#`(with-named-abort (#,(datum->syntax stx 'abort)) body ...)]))
(define-syntax (while/abort stx)
;; the end result
(syntax-case stx ()
[(_ test body ...)
#`(with-named-abort (#,(datum->syntax stx 'abort))
(let loop ([r test])
(when r
body ...
(loop test))))]))</code></pre>
<p>But that’s not the solution we’re after.</p>
<p>Racket’s answer to this is <a href="http://www.schemeworkshop.org/2011/papers/Barzilay2011.pdf">syntax parameters</a>. I don’t completely understand these, but they are at least close to dynamic variables, except at macro-expansion time. What you do is to define a syntax parameter, and then rebind it during the expansion: the rebound value is visible to macros which are expanded dynamically within the rebinding form. As with Racket’s <a href="http://docs.racket-lang.org/guide/parameterize.html">ordinary special variables</a> these look like functions (yet another namespace in disguise).</p>
<p>So we can define a syntax parameter called <code>abort</code> using <code><a href="http://docs.racket-lang.org/reference/stxparam.html#(form._((lib._racket/stxparam..rkt)._define-syntax-parameter))" style="color: inherit">define-syntax-parameter</a></code>:</p>
<pre><code>(require racket/stxparam)
(define-syntax-parameter abort
(λ (stx)
(raise-syntax-error #f "not available" stx)))</code></pre>
<p>So now any reference to <code>abort</code> will result in a syntax error:</p>
<pre><code>> (abort)
abort: not available in: (abort)
> abort
abort: not available in: abort</code></pre>
<p>And we can now try to use <code><a href="http://docs.racket-lang.org/reference/stxparam.html#(form._((lib._racket/stxparam..rkt)._syntax-parameterize))" style="color: inherit">syntax-parameterize</a></code>, to rebind <code>abort</code> as a macro:</p>
<pre><code>(define-syntax with-abort
(syntax-rules (with-abort)
[(with-abort) (void)]
[(with-abort body ...)
(call/cc
(λ (a)
(syntax-parameterize ([abort
(syntax-rules ()
[(_ ...) (a ...)])])
body ...)))]))</code></pre>
<p>And this fails horribly, because the outer <code>syntax-rules</code> thinks it owns the patterns and sees <code>...</code>s that it does not expect. So much for that.</p>
<p>Well, we could at least check this works with a specific number of arguments:</p>
<pre><code>(define-syntax with-abort
(syntax-rules (with-abort)
[(with-abort) (void)]
[(with-abort body ...)
(call/cc
(λ (a)
(syntax-parameterize ([abort
(λ (stx)
(syntax-case stx (abort)
[(abort) #'(a)]
[(abort x) #'(a x)]
[_ (raise-syntax-error #f "I give up" stx)]))])
body ...)))]))</code></pre>
<p>But this is obviously just a rubbish answer.</p>
<p>Well, there is an answer to this: all we really need to do is to make the <code>abort</code> macro attach itself to <code>a</code>, and there is a special hack, <code><a href="http://docs.racket-lang.org/reference/stxtrans.html#(def._((quote._~23~25kernel)._make-rename-transformer))" style="color: inherit">make-rename-transformer</a></code>, to do this:</p>
<pre><code>(define-syntax with-abort
(syntax-rules (with-abort)
[(with-abort) (void)]
[(with-abort body ...)
(call/cc
(λ (a)
(syntax-parameterize ([abort (make-rename-transformer #'a)])
body ...)))]))</code></pre>
<p>And this now works:</p>
<pre><code>> (with-abort (abort 1 2 3))
1
2
3</code></pre>
<p>And we can use this to write a really robust version of <code>collecting</code>:</p>
<pre><code>(require racket/stxparam)
(define-syntax-parameter collect
(λ (stx)
(raise-syntax-error #f "not collecting" stx)))
(define-syntax collecting
(syntax-rules ()
[(_) (void)]
[(_ body ...)
(let ([r '()])
(define (clct it)
(set! r (cons it r)) it)
(syntax-parameterize ([collect (make-rename-transformer #'clct)])
body ...
(reverse r)))]))</code></pre>
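<p>which behaves properly both inside and outside:</p>
<pre><code>> (collecting (for ([i (in-range 3)]) (collect (* i i))))
'(0 1 4)
> (collect 1)
collect: not collecting in: (collect 1)</code></pre>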
<p>As far as I can see there is still a problem, however: it is very hard to write macros which expand to other macros which themselves do pattern-matching, since the patterns get acquired by the outer macros. There must be some answer to this, but I can’t see what it is.</p>
<p>On the other hand, this is also extremely painful in CL: here is a version of <code>collecting</code> where <code>collect</code> is a local macro:</p>
<pre><code>(defmacro collecting (&body forms)
;; collect lists forwards using a tail pointer
;; local macro version
(let ((rn (make-symbol "R"))
(rtn (make-symbol "RT"))
(itn (make-symbol "IT")))
`(let ((,rn '())
(,rtn nil))
(macrolet ((collect (form)
`(let ((,',itn ,form))
(if (not (null ,',rn))
(setf (cdr ,',rtn) (cons ,',itn nil)
,',rtn (cdr ,',rtn))
(setf ,',rn (cons ,',itn nil)
,',rtn ,',rn))
,',itn)))
,@forms)
,rn)))</code></pre>
<p>This is not easy to understand.</p>
<p>Additionally, the problem almost always comes from ellipses, and in many interesting cases they can be avoided by using dotted pairs as patterns — here is yet another version of <code>with-abort</code> that does this:</p>
<pre><code>(require racket/stxparam)
(define-syntax-parameter abort
(λ (stx)
(raise-syntax-error #f "not available" stx)))
(define-syntax with-abort
(syntax-rules (with-abort)
[(with-abort) (void)]
[(with-abort body ...)
(call/ec
(λ (a)
(syntax-parameterize ([abort
(syntax-rules (abort)
[(abort . args) (a . args)])])
body ...)))]))</code></pre>
<p>This is clearly better than the CL version.</p>
<h2 id="summary">Summary</h2>
<p>Well, I think I now know enough about Racket’s macros to be going on with: I can certainly write the macros I need to be able to write now without it just being cargo-cult programming. There are still things I don’t understand, and the whole system smells to me as if, by trying to remain ideologically pure, it has become vast and essentially incomprehensible. This seems to be a common problem with Scheme, unfortunately.</p>
<h2 id="small-notes">Small notes</h2>
<p>Macro definitions scope properly: you can define a local macro the same way you can define a local function, so this works:</p>
<pre><code>(define (foo ...)
(define-syntax-rule (while test body ...)
(let loop ()
(when test
body ...
(loop))))
... (while ... ...) ...)</code></pre>
<p>This makes the equivalent of CL’s <code>MACROLET</code> easy to do.</p>
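<p>For instance:</p>
<pre><code>> (define (sum-squares n)
    (define-syntax-rule (square x) (* x x))
    (for/sum ([i (in-range (+ n 1))])
      (square i)))
> (sum-squares 3)
14</code></pre>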
<p>For fun, here is a version of <code>with</code> which can deal with named-<code>let</code>. There must be a way of implementing this without assignment, but I can never work out what it is.</p>
<pre><code>(define-syntax (with stx)
(syntax-case stx ()
[(_ ())
;; all null
#'(void)]
[(_ () body ...)
;; no vars: trivial case
#'(begin body ...)]
[(_ ([var val] ...))
;; null body: make sure vars are evaluated
#'(begin val ... (void))]
[(_ ([var val] ...) body ...)
;; normal let
#'((λ (var ...) body ...) val ...)]
[(_ n ())
(identifier? #'n)
;; named null
#'(void)]
[(_ n ([var val] ...))
(identifier? #'n)
;; named null body
#'(begin val ... (void))]
[(_ n ([var val] ...) body ...)
;; named let with arguments
;; (is there an implementation without assignment?)
(identifier? #'n)
#'((λ (n)
((λ (l)
(set! n l)
(l val ...))
(λ (var ...) body ...)))
#f)]
[_ (raise-syntax-error #f "bad syntax" stx)]))</code></pre>
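<p>It does seem to work:</p>
<pre><code>> (with loop ([i 0] [a '()])
    (if (= i 3)
        (reverse a)
        (loop (+ i 1) (cons i a))))
'(0 1 2)</code></pre>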
<h2 id="things-i-still-do-not-know-or-understand">Things I still do not know or understand</h2>
<p>At this point I’m mostly comfortable writing macros in Racket, but there are things I still do not understand:</p>
<ul>
<li>protecting and arming syntax objects — I just don’t understand what this is about at all;</li>
<li><code><a href="http://docs.racket-lang.org/syntax/Parsing_Syntax.html#(form._((lib._syntax/parse..rkt)._syntax-parse))" style="color: inherit">syntax-parse</a></code> is, I think, not difficult, but I have not bothered to learn about it as it seems to add yet another layer;</li>
<li>there are probably other things that I don’t even know I don’t know.</li></ul>
<p>At some point I might write a further part of this series on some of that.</p>
<hr />
<h2 id="pointers">Pointers</h2>
<p><a href="http://www.schemeworkshop.org/2011/papers/Barzilay2011.pdf">Eli Barzilay’s paper on <code>syntax-parameterize</code></a>.</p>
<p><a href="http://www.greghendershott.com/fear-of-macros/index.html">Fear of Macros</a>, again.</p>Pentax film SLRsurn:https-www-tfeb-org:-fragments-2015-01-15-pentax-film-slrs2015-01-15T11:07:32Z2015-01-15T11:07:32ZTim Bradshaw
<p>People often ask which Pentax film SLR to get. In brief: get an MX with a 50/1.4.</p>
<!-- more-->
<p>In more but still very partial detail, based only on cameras I have used.</p>
<p>The lens is what really matters. The 50/1.4 is a very fine 50mm lens and if you are a 50mm-lens person it is the lens you want. It’s also relatively cheap because Pentax are mostly not a cult brand. Many of the ‘M’ range cameras came with this lens, as did the LX. Lower-end cameras often came with a slower 50mm which is probably not as good: not because it’s slower but because it’s just not as good.</p>
<p>The MX and the ME / ME Super were part of Pentax’s ‘M’ range. There are others in the range. They are all metal and well-made.</p>
<ul>
<li>ME / ME Super. The second is the successor to the first. The ME <em>does not have a manual mode</em> so you definitely want the ME Super. They are pretty common. You select the speed in the ME Super with a pair of buttons, not a dial, which is a bit annoying to use. It is a long time since I used one but I don’t think there is a DoF preview, which is also annoying. The viewfinder is as good as the MX’s I think (again, a long time ago).</li>
<li>MX. This was Pentax’s professional camera before the LX. It’s a fully-mechanical metered-manual camera, with everything you could want from such a camera. The viewfinder is wonderful: with a 50mm lens you can use the camera with both eyes open.</li></ul>
<p>Other Pentaxes to consider.</p>
<ul>
<li>Super A / Program A (may have different names in the US). These were the spiritual descendants of the ME Super and were fine, I think; the Super A is the more serious of the two. I once owned a Program A but I have forgotten what it was like.</li>
<li>LX. Replaced the MX, and was a very serious professional camera which avoided the bloat which was already afflicting Nikon. They do go wrong (mine has, several times), but they can be repaired. Lovely to use.</li></ul>
<p>Pentaxes to avoid.</p>
<ul>
<li>Anything with a Z such as the MZ and so on. Late-film-era plasticky horrid things. Nothing wrong with them for what they are, but what they are is not a good thing to be: why would you buy one?</li>
<li>K1000. Student camera, lower-spec than the MX and now probably more expensive because it has become one of the cult film cameras. The reason it’s <em>become</em> a cult film camera is that people on photography courses were given it to use, and so lots of attractive studenty people used to carry them around, 20 years ago, and the myth has persisted. There’s nothing <em>wrong</em> with it, but why not buy an MX?</li></ul>
<p>Summary: get an MX, with a 50/1.4 if you can. If you want a more interesting camera try and find an LX, also with a 50/1.4 (same lens, same image quality!). If you want something more automatic look at a Super A, an ME Super or, again, an LX. The LX may break down in interesting ways but will be fixable. The MX will never go wrong.</p>Macros in Racket, part oneurn:https-www-tfeb-org:-fragments-2015-01-13-macros-in-racket-part-one2015-01-13T14:45:48Z2015-01-13T14:45:48ZTim Bradshaw
<p>I’ve written in Lisp for a long time, but I’ve never used a hygienic macro system in any way other than the most simple. Here are some initial notes on my experiences learning <a href="http://racket-lang.org/">Racket</a>’s macro system.</p>
<!-- more-->
<p>This is the first part of several: see <a href="../../../../2015/01/28/macros-in-racket-part-two">part two</a> and <a href="../../../../2015/12/12/macros-in-racket-part-three/">part three</a>. I’m not completely fluent with Racket macros yet: there are almost certainly mistakes and confusions here. Despite appearances, I also have no axe to grind: I’m learning Racket because I want to and I have time. Finally this is not a tutorial: look at Greg Hendershott’s <a href="http://www.greghendershott.com/fear-of-macros/index.html">Fear of Macros</a> for something closer to that. This is just some notes which were useful to me, and might be useful to other CL people.</p>
<h2 id="macros-in-common-lisp">Macros in Common Lisp</h2>
<p><a href="http://www.lispworks.com/documentation/common-lisp.html">Common Lisp</a>’s macro system is, in essence, simple: it’s what you’d end up writing if you had to write a macro system for a Lisp. That’s not surprising because it <em>is</em> the descendent of the first macro systems people wrote for Lisp. In CL what happens is this:</p>
<ol>
<li>the reader ingests the source text and produces data structures which represent the source of the program;</li>
<li>these structures are possibly transformed by macros, which are simply Lisp functions which are given the Lisp representation of the source and return some other representation;</li>
<li>once all macros are expanded, then the code is compiled, evaluated or both.</li></ol>
<p>(I have missed out some subtleties here, but they don’t matter for my purposes.)</p>
<p>In CL, what the reader produces is exactly what you would expect. If it reads <code>"(defun foo (a) a)"</code> then, with standard settings, it returns a list whose car is the symbol <code>DEFUN</code> (in the <code>CL</code> package) and so on. It is this structure that macros transform.</p>
<p>CL provides relatively limited support for writing macros: there is backquote, which is critical to being able to write macros which are even slightly readable, limited pattern matching in the form of destructuring, and there are mechanisms to generate unique names as well as a few other things. There is a semi-standard way of enquiring about bindings in the environment at macro expansion time, although this is not in the standard.</p>
<p>In practice, CL’s macro system has turned out to work very well; in theory it has all sorts of problems, the most important being that the programmer is entirely responsible for making sure that macros don’t introduce or accidentally use names they should not. Consider this:</p>
<pre><code>(defmacro collecting (&body forms)
;; collect lists forwards using a tail pointer
;; polluting version
`(let ((r '())
(rt nil))
(flet ((collect (form)
(if (not (null r))
(setf (cdr rt) (cons form nil)
rt (cdr rt))
(setf r (cons form nil)
rt r))
form))
,@forms)
r))</code></pre>
<p>This intentionally introduces a function binding, <code>collect</code>, but also accidentally introduces bindings for <code>r</code> and <code>rt</code>.</p>
<pre><code>(let ((r 2))
(collecting
(+ r r)))</code></pre>
<p>Does not do what it should. One right way to write the <code>collecting</code> macro is like this:</p>
<pre><code>(defmacro collecting (&body forms)
;; collect lists forwards using a tail pointer
;; non-polluting version
(let ((rn (make-symbol "R"))
(rtn (make-symbol "RT")))
`(let ((,rn '())
(,rtn nil))
(flet ((collect (form)
(if (not (null ,rn))
(setf (cdr ,rtn) (cons form nil)
,rtn (cdr ,rtn))
(setf ,rn (cons form nil)
,rtn ,rn))
form))
,@forms)
,rn)))</code></pre>
<p>And now the above form does not signal an error and correctly returns <code>()</code>.</p>
<p>Note that the problem is with <em>names</em> and not just bindings. Consider this CL code:</p>
<pre><code>(defvar *stashes* '())
(defvar *mark* nil)
(defun stash (name thing)
;; Stash something under a name
(setf *stashes* (acons name thing *stashes*))
(values name thing))
(defun retrieve (name)
;; Retrieve the value of a name, dropping everything stashed more
;; recently, and stopping at the mark, if any.
(let ((mark *mark*))
(labels ((rl (tail)
(if (or (null tail)
(eq (first tail) mark))
(values nil nil)
(destructuring-bind ((n . v) . r) tail
(if (eql n name)
(progn
(setf *stashes* r)
(values v t))
(rl r))))))
(rl *stashes*))))
(defmacro with-marked-stash (&body forms)
;; mark the stack of stashes for the dynamic extent of FORMS
(let ((mn (make-symbol "MARK")))
`(let ((*stashes* (cons ',mn *stashes*))
(*mark* ',mn))
,@forms)))</code></pre>
<p>In this code the marks on the stack of stashes established by <code>with-marked-stash</code> are not bound anywhere: they are just names. But it’s important to the correct functioning of the code that they are <em>unique</em> names. (There are better ways of doing this such as using a fresh cons for the mark: I just wanted an example where a name mattered other than as the name of a variable.)</p>
<p>The politically correct way of saying that we’re talking about names is to talk about ‘lexical context’ or ‘lexical information’: it’s the same thing but more confusing to those not initiated into the cult, which is always good.</p>
<p>The disadvantages of the CL macro system are this problem with hygiene and the lack of any clever tools to do pattern matching on macro forms. The second of these is easily overcome by using any of a number of tools, while the first is generally not a problem in practice: CL being a Lisp–2 (separate namespaces for functions and variables) helps here.</p>
<p>The advantage of the CL macro system is that there is no magic: macros get passed the things that the source code looks like — generally a structure whose interesting parts are lists and symbols — which you process using the normal list-processing tools to produce some other structure which is the expansion of the macro. It’s easy enough that you could write it yourself: there are no special opaque objects being handed around.</p>
<p>That being said, having a <em>standard</em> set of tools for pattern matching in macros and a way of dealing with the hygiene problems which is less ugly than in CL might well be worth the cost in transparency.</p>
<h2 id="macros-in-scheme">Macros in Scheme</h2>
<p>I am not a native <a href="https://en.wikipedia.org/wiki/Scheme_%28programming_language%29">Scheme</a> person, but it has clearly taken the whole hygiene thing very seriously: Scheme, as a set of languages, takes purity much more seriously than CL, which revels in being a fairly grungy language. However these posts are not about Scheme: the only reason I am mentioning it is to say that I have not cared at all whether anything here applies generally to Scheme or is specific to Racket.</p>
<h2 id="macros-in-racket-baby-steps">Macros in Racket: baby steps</h2>
<p>For a long time the only kind of macros that I’ve really been able to define in Racket are annoyingly trivial ones using <code><a href="http://docs.racket-lang.org/reference/stx-patterns.html#(form._((lib._racket/private/misc..rkt)._define-syntax-rule))" style="color: inherit">define-syntax-rule</a></code>, things like:</p>
<pre><code>(define-syntax-rule (while test body ...)
(let loop ()
(when test
body ...
(loop))))</code></pre>
<p>That’s all very well, but the ‘obvious’ (and obviously wrong) definition of <code>collecting</code> then looks like this:</p>
<pre><code>(define-syntax-rule (collecting body ...)
;; horribly wrong
(let ([s '()])
(define (collect it)
(set! s (cons it s))
it)
body ...
(reverse s)))</code></pre>
<p>(There’s no obvious way to build lists forwards in Racket, since pairs are immutable: building the list backwards and then reversing it is probably as cheap as anything.) This is either introducing a spurious binding for <code>s</code> or not introducing a deliberate one for <code>collect</code>, and in fact, of course, it’s the latter.</p>
<p>Quite apart from this, <code>define-syntax-rule</code> gives the strong impression that it lets you write only the sort of macros that would give people who write C++ great pride: simple ones. (Actually you can do reasonably hairy things even with this because the pattern matching is very competent:</p>
<pre><code>(define-syntax-rule (mlet ([var val] ...) body ...)
((λ (var ...) body ...) val ...))</code></pre>
<p>is an implementation of simple <code>let</code>, for instance. Indeed we can defined named <code>let</code> as well:</p>
<pre><code>(define-syntax-rule (nlet label ([var val] ...) body ...)
(mlet ()
(define (label var ...) body ...)
(label val ...)))</code></pre>
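<p>which does work:</p>
<pre><code>> (nlet fact ([n 5] [acc 1])
    (if (zero? n)
        acc
        (fact (- n 1) (* n acc))))
120</code></pre>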
<p>What I <em>can’t</em> work out how to do is to make <code>mlet</code> do both things: I think this is too hard for <code>define-syntax-rule</code> although I might be wrong.)</p>
<p>But for a long time I was stuck with that: whenever I looked at Racket macros in more detail I walked into a wall of opaque terminology and just decided that I had better things to do that year. This year, I don’t.</p>
<h2 id="two-desirable-macros">Two desirable macros</h2>
<p>There are many ways people use macros in Lisp: some of them are good. I decided that if I could write two macros <em>and understand them</em> then I would be well on my way.</p>
<ul>
<li><code>collecting</code> / <code>collect</code>. This is the macro given above in CL. It’s interesting not for what it does — the tail-pointer stuff is less interesting now than it once was and is hard to implement in Racket anyway — but because it introduces a binding: it is intentionally not completely hygienic, while having an essentially trivial expansion: no complicated destructuring is needed.</li>
<li>CL’s <code>let</code>, which I’ll call <code>clet</code>. This is interesting because it requires destructuring of arguments which is not completely simple, but it does not present problems of hygiene. The reason it’s not just a subset of Racket’s <code><a href="http://docs.racket-lang.org/reference/let.html#(form._((lib._racket/private/letstx-scheme..rkt)._let))" style="color: inherit">let</a></code> is that CL allows variables with no initial value, which get bound to <code>nil</code> and should, I think, become <code>undefined</code> in Racket. So <code>(clet ((x 1) y) body ...)</code> should expand to <code>(let ([x 1] [y undefined]) body ...)</code> or something equivalent to that.</li></ul>
<p>Here is a simple implementation of <code>clet</code> in CL, missing any error checking:</p>
<pre><code>(defmacro clet (bindings &body forms)
(multiple-value-bind (args vals)
(loop for binding in bindings
for consp = (consp binding)
collect (if consp (first binding) binding) into as
collect (if consp (second binding) nil) into vs
finally (return (values as vs)))
`((lambda (,@args) ,@forms) ,@vals)))</code></pre>
<p>Like most macros in CL it’s not particularly pretty but it is reasonably clear what it does.</p>
<p>I will use these two macros as examples below.</p>
<h2 id="phases">Phases</h2>
<p>To understand macros in any Lisp you need to develop a strong idea of the various ‘times’ that things happen and the relationships between them: for CL these are things like read time, macro expansion time, compilation time (compiler-macro expansion time), load time, run time and so on. Racket has formalised the parts of this after read time into a notion of ‘phase’:</p>
<ul>
<li>phase 0 is run-time;</li>
<li>phase 1 is macro expansion time;</li>
<li>phase 2 would, I think, be macros used in macro expansion;</li>
<li>and so on.</li></ul>
<p>However I am not sure how this ties in to read time: is that phase 1? For CL read time is <em>before</em> macro expansion time although the two are, or may be, interleaved at the granularity of forms (rather than per-file or per-compilation-unit). Also there are negative phases which I don’t understand, although I think they must be to do with code which exists at macro expansion time (phase 1) wanting to make things available at run time (phase 0). All of this is integrated into the module system (and CL gets away without it mostly because it does not have a formalised module system).</p>
<p>Bindings exist at a phase, and the same name can have different bindings at different phases.</p>
<p>Modules can say what they <code><a href="http://docs.racket-lang.org/reference/require.html#(form._((lib._racket/private/base..rkt)._provide))" style="color: inherit">provide</a></code> at which phase, and, importantly, the <code>racket</code> module does indeed provide different things at different phases: if you look at it you’ll find:</p>
<pre><code>(provide ...
(for-syntax (all-from-out racket/base)))</code></pre>
<p>Which means that, at phase 1, what is available is <code>racket/base</code>: a significantly smaller language than <code>racket</code> itself. If you need things in macros which are in <code>racket</code> but not <code>racket/base</code> you need to <code><a href="http://docs.racket-lang.org/reference/require.html#(form._((lib._racket/private/base..rkt)._require))" style="color: inherit">require</a></code> them:</p>
<pre><code>(require (for-syntax ...))</code></pre>
<p>An example of this is <code><a href="http://docs.racket-lang.org/reference/pairs.html#(def._((lib._racket/list..rkt)._first))" style="color: inherit">first</a></code> & <code><a href="http://docs.racket-lang.org/reference/pairs.html#(def._((lib._racket/list..rkt)._rest))" style="color: inherit">rest</a></code>, both of which are provided at phase 0 by <code>racket</code> but <em>not</em> at phase 1: if you want them you need to say <code>(require (for-syntax racket/list))</code>.</p>
<h2 id="syntax-objects">Syntax objects</h2>
<p>As in CL, Racket macros are source-to-source functions. The difference is that in Racket the source is represented by a <a href="http://docs.racket-lang.org/reference/syntax-model.html">syntax object</a> and a macro needs to produce another syntax object, while in CL source is represented as it looks: usually as nested lists.</p>
<p>So then a Racket macro is simply a function which maps from syntax objects to other syntax objects. The reason for having an opaque syntax object is that it can carry all sorts of information around with it, and in particular it can carry information about <em>names</em>, which helps the system maintain hygiene. (There is also information about source location and so on, but this isn’t so important.)</p>
<p>So the Racket macro system needs tools to transform syntax objects into other syntax objects, ultimately by digging around inside them to find out what the source code actually was. This is necessarily more complicated than it is in CL both because the objects are opaque and because they contain information which is not present at all in the objects CL macros get.</p>
<p>Additionally, and mostly independently, there is a layer on top of this which does not exist in CL (without libraries) at all: pattern matching and template filling. This means that for many purposes you can write macros in Racket simply by specifying patterns that the source must match and filling templates with the results of those matches. This is a very nice way of writing macros, although it renders what is actually going on even more opaque. For a CL person, used to feeling the bits between their toes, this can be quite disconcerting at first since what is actually <em>happening</em> can become entirely obscure.</p>
<h2 id="syntax-objects-for-the-unwashed-lisp-hacker">Syntax objects for the unwashed Lisp hacker</h2>
<p>Well, of course it is possible to ignore all this terrifyingly modern pattern matching stuff and write macros almost the way you do in CL, and it’s worth doing that at least once, perhaps. So here is <code>clet</code>:</p>
<pre><code>(require (for-syntax racket/list)
racket/undefined)
(define-syntax clet
(λ (stx)
(define ctx (quote-syntax clet))
(define top-level (syntax->list stx))
(define bindings (second top-level))
(define body (rest (rest top-level)))
(define-values (args vals)
(for/lists (as vs) ([binding (syntax->list bindings)])
(define it (syntax->list binding))
(if it
(values (first it) (second it))
(values binding (datum->syntax ctx 'undefined)))))
(datum->syntax
ctx
`((λ (,@args) ,@body) ,@vals))))</code></pre>
<p>So how does this work? Well, it uses some functions provided by Racket to look inside the syntax object (getting the ‘datum’ in the syntax object) and in turn to construct a new one:</p>
<ul>
<li><code><a href="http://docs.racket-lang.org/reference/stxops.html#(def._((quote._~23~25kernel)._syntax-~3elist))" style="color: inherit">syntax->list</a></code> takes a syntax object which wraps a proper list and unpacks one level of it, returning a list of syntax objects, or <code>#f</code> if it does not wrap a proper list;</li>
<li><code><a href="http://docs.racket-lang.org/reference/stxops.html#(def._((quote._~23~25kernel)._datum-~3esyntax))" style="color: inherit">datum->syntax</a></code> takes a context object and a datum and wraps it into a syntax object, leaving any syntax objects in the datum as they are;</li>
<li><code><a href="http://docs.racket-lang.org/reference/Syntax_Quoting__quote-syntax.html#(form._((quote._~23~25kernel)._quote-syntax))" style="color: inherit">quote-syntax</a></code> is like <code><a href="http://docs.racket-lang.org/reference/quote.html#(form._((quote._~23~25kernel)._quote))" style="color: inherit">quote</a></code> but it creates a syntax object, and this object contains the lexical information present in the source.</li></ul>
<p>So the macro pulls apart the syntax object in a fairly straightforward way: making it into a list, extracting the second element and all the remaining elements, which will be the binding specifications, and then grinding over the binding specifications, using <code>syntax->list</code> both to work out if the bindings are a list or not and to extract the variable and value if it is, and then reassembles everything as a call to an anonymous function.</p>
<p>The critical trick is that the context that <code>datum->syntax</code> needs <em>is a syntax object</em> and you need to pick the right one: you can use the syntax object you got given, which provides the context of the place where the macro was expanded, or you can use a syntax object of your own devising which provides that object’s context. And in this case we want our own context, not the context of the place where the macro was expanded. This is what <code>ctx</code> is for: providing a suitable context.</p>
<p>Notice the <code>require</code>:</p>
<ul>
<li>we need <code>racket/list</code> at phase 1 (macro expansion time) because the macro uses <code>first</code> and so on;</li>
<li>we need <code>racket/undefined</code> at phase 0 (run time) as the expansion of the macro uses <code>undefined</code>.</li></ul>
<p>So we can try this:</p>
<pre><code>> (clet ((x 12) y) (values x y))
12
#<undefined>
> (let ((undefined 'hello)) (clet (x) x))
#<undefined>
> (clet ((undefined 'hello)) (clet (x) x))
#<undefined>
> (clet ((x 1)))
λ: bad syntax in: (λ (x))
> (clet (1) 1)
λ: not an identifier, identifier with default, or keyword in: 1</code></pre>
<p>The second and third examples show why we need the macro context: we don’t want a binding of <code>undefined</code> to alter what the <code>clet</code> picks as the undefined value. The fourth and fifth examples show that the macro isn’t very robust, and has terrible error reporting.</p>
<p>Some notes:</p>
<ul>
<li>I’ve deliberately written <code>(define-syntax clet (λ (stx) ...)</code> rather than the more pleasant <code>(define-syntax (clet stx) ...)</code> to make it clear that <code>clet</code> is a function which transforms a syntax object;</li>
<li>but I’ve used internal <code><a href="http://docs.racket-lang.org/reference/define.html#(form._((lib._racket/private/base..rkt)._define))" style="color: inherit">define</a></code> where in CL there would be <code>let*</code> or nested <code>let</code>s — I’m not sure why other than reducing indentation;</li>
<li>the destructuring of the syntax object is done in a way which is primitive even by the standards of CL;</li>
<li>it should be evident that the macro is not very robust — something like <code>(clet ((x 1) 2) ...)</code> will fail horribly;</li>
<li>it’s not <em>much</em> less clear than the CL version, although I think it is a bit less clear.</li></ul>
<p>I am fairly but not completely sure that this macro is right: I am slightly confused by the handling of <code>undefined</code>: although it is easy to check, by wrapping <code>clet</code> into a module, that clients of that module don’t themselves need to import <code>racket/undefined</code> and do get the right initial values in forms like <code>(clet (x) ...)</code>, I am still a bit queasy about what it’s doing.</p>
<p>What is very clear is that this macro is just horrible: even by the standards of CL macros it’s horrible, because there is so much explicit unpacking and repacking going on. Things would be even worse if there was any significant error checking. Something better than this is needed to deal with syntax objects, in a way that it isn’t for CL macros. In <a href="../../../../2015/01/28/macros-in-racket-part-two">next week’s exciting episode</a> I’ll look at ways of making this better.</p>
<hr />
<h2 id="pointers">Pointers</h2>
<p><a href="http://blog.racket-lang.org/2011/04/writing-syntax-case-macros.html">Writing ‘syntax-case’ Macros</a> by Eli Barzilay. This was the article that first helped me understand what was going on.</p>
<p><a href="http://www.greghendershott.com/fear-of-macros/index.html">Fear of Macros</a> by Greg Hendershott. This is an introduction to macros, and macros in Racket in particular, by the author of Frog.</p>
<p>Programming is <em>not meant to be easy</em> and it’s important to make sure that it is as cryptic as possible otherwise people other than cult members might be able to understand it. Of course, you also need to make sure it’s <em>pure</em>, because otherwise cult members will laughingly throw you into a pit full of spikes and the rotting remains of other heretics.</p>
<!-- more-->
<p>For instance, you can’t be writing this sort of thing:</p>
<pre><code>(defun ss (n)
(let ((s 0) (i 0))
(tagbody
loop
(when (> i n) (go done))
(setf s (+ s (* i i))
i (+ i 1))
(go loop)
done
(return-from ss s))))</code></pre>
<p>This is just terrible code. Non cult members may well be able to understand it, and the cultists will have you in the pit before you know it.</p>
<p>You might think this was better</p>
<pre><code>(defun ss (n)
(loop for i from 0 to n
summing (* i i)))</code></pre>
<p>But in fact it’s far worse. Fellow cultists will definitely still be at the laughing and pit-throwing, and the others will certainly understand it <em>and laugh at you</em> because you don’t know the closed form.</p>
<p>Instead, you must write this:</p>
<pre><code>(define (ss n)
(let-values ([(a i l) (call/cc (λ (c) (values 0 0 c)))])
(l (+ a (* i i))
(+ i 1)
(if (< i (- n 1))
l
(λ (a i l) a)))))</code></pre>
<p>This is almost a perfect solution. It’s so achingly pure and cryptic that you will be immediately appointed king of the cult and be able to do your own laughing, and throw other members into pits you have first made them dig, for which they will thank you as they slide down the spikes. Non cult members stand essentially no chance of understanding what it does and sniping about the whole silly closed-form thing: certainly the only way they will be able to learn what it does is by first joining the cult, at which point, as king, you can just throw them straight into the pit.</p>
<p>It’s important you understand this.</p>The end of the worldurn:https-www-tfeb-org:-fragments-2014-12-31-the-end-of-the-world2014-12-31T17:03:07Z2014-12-31T17:03:07ZTim Bradshaw
<p>Investment bankers are often called ‘sharks’ and this is, in fact, a good description. There is nothing wrong with sharks: they are beautiful animals designed by billions of years of natural selection to do one thing extremely well. You can not expect a shark to be other than a shark: rather you must understand how sharks behave and arrange matters so as not to be eaten. Governments can do this for banks: they did it in 1933, after all, and it served us well for nearly 70 years. However governments entirely failed to do this after the events of 2008 for reasons of stupidity and corruption.</p>
<!--more-->
<p>In 2016 their failure to act came home to roost: the housing bubble in the UK collapsed causing a cascade failure of investment and retail banks. Initially this was confined to the UK but it rapidly spread to the US. The retail banking system held together for a while after its forced nationalisation, but decades of IT mismanagement combined with the exodus of staff, mostly to China and South America, led to its final collapse in early 2017. Starvation began in the US in February: in the UK food shortages, already serious, became acute in May of that year. After May it becomes hard to disentangle events: we know that the US launched a strike against the Russian federation in June to acquire grain, and that there was a limited exchange of nuclear weapons. The most serious result of this was the effective destruction of the internet and telephone networks as a result of EMP and the resulting loss of almost all communication and reliable records. The US also launched an attack on Canada, once again for grain, to which the UK responded, ostensibly to defend a Commonwealth country but probably in fact to secure Canadian grain stocks for itself. The resulting nuclear exchange destroyed London and most of the US east coast cities not destroyed by the earlier US-Russian exchange. Much more seriously the weather effects from these two exchanges of weapons caused the near-complete failure of the harvest in Canada and the northern parts of the US in 2017. Late in 2017 there was an attempted invasion of the UK and France by remaining US armed forces: this was repulsed with a further exchange of nuclear weapons. This is often known as ‘the second American war’ and also as ‘the fall of France’: before this time France, as the only country with an energy strategy not dependent on oil (which had become scarce after the Russians nuked most of the Middle East in the earlier Russian-US war), was relatively stable, but the US specifically targeted French nuclear plants in the second war, leading to the failure of French infrastructure. By the end of 2017 over two hundred million US citizens had starved and there was no effective government in the US, Canada, the UK, France, or Russia.</p>
<p>Things got a lot worse later, of course.</p>Playing cards with the Devilurn:https-www-tfeb-org:-fragments-2014-12-30-playing-cards-with-the-devil2014-12-30T13:05:55Z2014-12-30T13:05:55ZTim Bradshaw
<p>You are playing cards with the Devil, the prize being your soul.</p>
<!--more-->
<p>The game is very simple:</p>
<ol>
<li>the Devil deals two cards face down;</li>
<li>you both turn over the cards — if they are the same colour then you win; if they are different colours the Devil wins.</li></ol>
<p>You play six times, and the Devil wins every time. He has obviously stacked the deck.</p>
<p>You suggest to the Devil that you should play the opposite game: if the cards are <em>different</em> colours you win. He agrees, and you play six times: he wins every time. Obviously He either saw this coming or He has changed the deck while you were not looking.</p>
<p>You suggest a third game:</p>
<ol>
<li>the Devil will deal two cards, face down;</li>
<li>after the cards are dealt you will choose which game to play — the one where you win if they are the same, or the one where you win if they are different;</li>
<li>the cards are turned over.</li></ol>
<p>Much to your surprise he agrees to play once more, and you play six times. The Devil wins every time.</p>Rerooting Frogurn:https-www-tfeb-org:-fragments-2014-12-29-rerooting-frog2014-12-29T17:15:25Z2014-12-29T17:15:25ZTim Bradshaw
<p><a href="https://github.com/greghendershott/frog">Frog</a> wants to create blogs which hang directly under <code>/</code>. I want mine to live under a subdirectory, and to have all its data living under that directory. I’ve made some changes to Frog to support that. As of 20150702 these changes have been merged to the main frog repo: you no longer need to refer to mine, which is obsolete.</p>
<!-- more-->
<p>What I did was to add a new parameter, <code>uri-prefix</code> (implemented in the code as <code>current-uri-prefix</code>) and write a function which converts between the original name and whatever external name is wanted: at the moment this just adds the prefix but it has ambitions. Most of the problem was then finding all the places where absolute URIs were assumed in the code, and I’m not sure I’ve done that — Racket does not seem to have very good tools for understanding the structure of any significant body of code, which I found surprising: perhaps I am spoiled by the very wonderful <a href="http://www.lispworks.com/">LispWorks</a> code browsing tools.</p>
<p>These fixes could be found on <a href="https://github.com/tfeb/frog">GitHub</a>, on the <code>uri-root-fix</code> branch: this is no longer needed as improved versions are now in the main frog repo.</p>
<h2 id="a-theory-of-names">A theory of names</h2>
<p>The underlying problem here is that you need a <em>theory of names</em> to do this sort of thing: rather than saying ‘things of type x live in <code>/things/x/...</code>’ and then discovering that in fact they should live in <code>/x/things/...</code> or something, the right answer is to keep the location in some representation which:</p>
<ul>
<li>doesn’t commit you to what the final pathname, URI or whatever is;</li>
<li>has all the information you need to generate the final representation, including the ability to carry around completely arbitrary information;</li>
<li>can not be confused for the final representation by the program.</li></ul>
<p>Then you can write mapping functions, including extensible mapping functions, to invent the names you actually need from the objects you have.</p>
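<p>As a sketch of the kind of thing I mean (this is a toy of my own, not anything Frog actually does):</p>
<pre><code>(require racket/string)

;; a logical name is deliberately not a string or a path, so it
;; cannot leak into code which expects one of those
(struct logical-name (kind components meta) #:transparent)

(define current-uri-prefix (make-parameter ""))

;; one mapping among many possible: others (to pathnames, say)
;; would be written the same way
(define (logical-name->uri ln)
  (string-append (current-uri-prefix)
                 "/" (symbol->string (logical-name-kind ln))
                 "/" (string-join (logical-name-components ln) "/")))</code></pre>
<p>so <code>(logical-name->uri (logical-name 'posts (list "2014" "12" "29") '()))</code> produces <code>"/posts/2014/12/29"</code>, or whatever the prefix dictates.</p>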
<p>Common Lisp’s <a href="http://www.lispworks.com/documentation/HyperSpec/Body/19_c.htm">logical pathnames</a> are an early effort in this direction: they offer the ability to translate a logical pathname into a physical pathname in various ways. But they’re not the right answer simply because they are pathnames: they can (and are designed to) leak into functions which expect pathnames, and can also leak into places where strings are expected, since pathnames have representations as strings. It’s important that whatever representation is used for logical names is <em>not</em> compatible with code which wants, for instance, to emit URIs, so that you are forced to map things everywhere they are needed. In addition the mappings you can define for logical pathnames are not really general enough.</p>
<p>Note that it’s not enough to have a good approach to manipulating structured pathnames, URIs or whatever, because those are the <em>wrong type of thing</em> to manipulate.</p>Firsturn:https-www-tfeb-org:-fragments-2014-12-21-first2014-12-21T13:08:23Z2014-12-21T13:08:23ZTim Bradshaw
<p>I often find myself writing fairly substantial mail messages, posts or comments to posts, which inevitably get lost. This is a way to keep some of them, I hope.</p>
<!-- more-->
<p>I’ve resisted anything like this because it involves one of two things:</p>
<ul>
<li>keeping my information in a system run and owned by some third party, which system and third party will at some point change beyond recognition, vanish, start trying to sell me things or do some combination of these things;</li>
<li>running and maintaining a complex and probably insecure system <em>which is visible to the entire internet</em>.</li></ul>
<p>Neither of these options is very attractive.</p>
<p>There is a third way: if you’re willing to give up features like comments (and why would I care what anyone else thinks?) then it’s possible to create a blog using only static data, which minimises the risk and pain. This is what I’m now doing, using <a href="https://github.com/greghendershott/frog">Frog</a>. The worst case is that Frog goes away and I need to find some other tool: in either case I should be able to avoid losing information.</p>