Markdown for Everything

I like to use markdown for everything: for my blogs, my emails (though they are mostly plain text), most of my short technical writing, and for most presentations. There have been many reasons for this shift:

  1. Markdown is so much faster. Creating a PowerPoint presentation from an existing writeup or notes used to take a couple of hours; when I started using LaTeX/Beamer, the time came down to about half an hour, but markdown reduced it to as little as 15 minutes.

  2. Because markdown is designed to look like plain text (and not like lots of tags and formatting instructions), it forces me to focus on the content. Instead of wasting time adding bells and whistles to an inane document or slides (as I often used to do with MS Office and LaTeX), I can spend that time improving what I am actually saying.

  3. Because markdown is basically plain text, it is easy to convert into almost any other format without loss of formatting. My favourite way to send a Word file to a colleague is to use pandoc to convert my markdown text into Word. LaTeX is at the other extreme: because it is publishing quality, conversion to almost any other format loses information.

  4. It ties in with my long journey towards minimalism (see my blog post a couple of years ago and the two earlier posts referred to there). Less is More.

I am able to do so much with markdown because of several great tools like pandoc and Jupyter.


Pandoc

Pandoc is the workhorse of document conversion. Years after I started using it, I keep discovering new features and capabilities of this remarkable software. It can convert from almost any format to any other format. Its default behaviour is usually good enough for most purposes, but it can be customized through a bewildering range of options and templates. And it is all open source.

Jupyter Notebook and RISE

Many years ago, I started using Jupyter to perform interactive computations in a web browser. It helps me to demonstrate complex models by executing python code and viewing interactive plots using matplotlib. I discussed some of this in a blog post two years ago.

More recently, I started using RISE, which marries reveal.js with Jupyter to create a powerful presentation tool. Jupyter-RISE brings interactivity to the slides in a way that PowerPoint and LaTeX/Beamer users cannot even dream about. In Jupyter-RISE, it is so easy to edit a sentence not only without leaving the presentation, but without even leaving full screen mode. It is even possible to start with a blank slide and fill it up with audience responses. The only other similarly interactive solution that I am aware of is Xournal connected to a writing tablet.

If you have the Jupyter server running somewhere on the network, then you need only a web browser to launch the presentation. Within my Institute, that means I can walk into any classroom or meeting room empty handed, point the web browser to my Jupyter server running on one of the machines in the local network, and get going. When travelling without my laptop, I have often carried a portable installation of the open source WinPython on a pen drive: I simply plug it into any available machine and start Jupyter.

Wrapper and Glue

To make my life easier, I have written a set of wrapper or glue scripts to convert to/from markdown using various open source tools. All the hard work is done by these great tools; my scripts only glue them together to do the job quickly and painlessly. These scripts are available in my GitHub repo along with usage instructions.

For example, there is a script to create Jupyter-RISE slides from markdown. I write the content using markdown, then mark the beginning of slides, subslides and fragments using a comment line with the appropriate keyword:

<!-- slide|sub-slide|fragment -->

The script is basically a pre-processor and wrapper around Jupyter-nbconvert that turns this file into a Jupyter-RISE notebook.
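A minimal sketch of such a pre-processor might look like the following (the function and variable names are hypothetical, and my actual script wraps Jupyter-nbconvert rather than building the notebook JSON by hand):

```python
import re

# Split markdown on the slide-marker comments and emit cells carrying
# RISE's slideshow metadata. This is an illustrative simplification.
MARKER = re.compile(r"<!--\s*(slide|sub-slide|fragment)\s*-->")

def md_to_notebook(text):
    cells = []
    slide_type, buf = "slide", []

    def flush(stype, lines):
        # Skip empty cells so consecutive markers do not create blank slides.
        if any(l.strip() for l in lines):
            cells.append({
                "cell_type": "markdown",
                "metadata": {"slideshow": {"slide_type": stype}},
                "source": "\n".join(lines).strip(),
            })

    for line in text.splitlines():
        m = MARKER.match(line.strip())
        if m:
            flush(slide_type, buf)       # close the previous cell
            slide_type, buf = m.group(1), []
        else:
            buf.append(line)
    flush(slide_type, buf)               # close the last cell
    return {"nbformat": 4, "nbformat_minor": 5,
            "metadata": {"celltoolbar": "Slideshow"}, "cells": cells}
```

Writing the returned dict out with json.dump to a .ipynb file gives a notebook that Jupyter (and hence RISE) can open.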

I can intersperse code cells demarcated by lines containing three backticks (```). But if my presentation is mostly python code with a bit of markdown here and there, I start with a python file instead, and intersperse the markdown as comments in this file. Again, a pre-processor and wrapper around Jupyter-nbconvert turns this file into a Jupyter-RISE notebook. The following comment lines can be used in the python file to demarcate cells:

# <codecell>: [slide|sub-slide|fragment]
# <markdowncell>: [slide|sub-slide|fragment]
#% ipython magic commands like %matplotlib inline

The advantage is that the entire file is valid python code and can be edited and tested in my favourite python editor or IDE.
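The cell-splitting step for this python-file variant can be sketched as follows (again a hypothetical simplification, not my actual script; for brevity it leaves the #% magic lines untouched inside code cells):

```python
import re

# Scan the python file for the cell-marker comments described above and
# group the following lines into cells, uncommenting the markdown ones.
CELL = re.compile(r"#\s*<(codecell|markdowncell)>:?\s*(slide|sub-slide|fragment)?")

def split_cells(source):
    cells, current = [], None
    for line in source.splitlines():
        m = CELL.match(line)
        if m:
            current = {
                "cell_type": {"codecell": "code",
                              "markdowncell": "markdown"}[m.group(1)],
                "slide_type": m.group(2) or "slide",  # default to a new slide
                "lines": [],
            }
            cells.append(current)
        elif current is not None:
            if current["cell_type"] == "markdown":
                line = re.sub(r"^#\s?", "", line)  # strip the comment prefix
            current["lines"].append(line)
    return cells
```

Because the markers are ordinary comments, running split_cells never changes what the python interpreter sees in the original file.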

Often, I want to distribute the presentation as a PDF file. I have a script (ipnb2pdf) that converts Jupyter-RISE notebooks to PDF slides via HTML using Jupyter-nbconvert and wkhtmltopdf. This basically automates the manual process described in the RISE documentation.

Sometimes, I want to make my presentation using a PDF viewer rather than a Jupyter-RISE notebook, but still want to write everything in markdown. Here too, pandoc is your friend: my small shell script md2beamer runs pandoc and then LaTeX to produce a PDF file. Quite often, I want to “weave” the output of python code into my markdown text using pweave. In this situation, my shell script pwv-pandoc gets the job done by running pweave, pandoc and LaTeX in succession.
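To give a flavour of what a wrapper like md2beamer does, here is a sketch that merely builds the two commands to run; the exact pandoc flags and the theme variable are my assumptions for illustration, not the real script’s options:

```python
# Build the markdown -> Beamer PDF pipeline as a list of commands.
# (Illustrative only: the real md2beamer script lives in the GitHub repo.)
def beamer_commands(md_file, theme="default"):
    tex_file = md_file.rsplit(".", 1)[0] + ".tex"
    return [
        # Step 1: markdown -> standalone Beamer LaTeX source
        ["pandoc", "-s", "-t", "beamer", "-V", f"theme={theme}",
         md_file, "-o", tex_file],
        # Step 2: LaTeX -> PDF
        ["pdflatex", tex_file],
    ]

# Usage (not run here): for cmd in beamer_commands("talk.md"):
#     subprocess.run(cmd, check=True)
```

Keeping the pipeline as data like this makes it easy to swap pdflatex for xelatex, or to add a second LaTeX pass, without touching the conversion logic.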


Aadhaar and signing a blank sheet of paper redux

The Aadhaar abuse that I described a year ago as a hypothetical possibility has indeed happened in reality. In July 2017, I described the scenario in a blog post as follows:

That is when I realized that the error message that I saw on the employee’s screen was not coming from the Aadhaar system, but from the telecom company’s software. … Let us think about why this is a HUGE problem. Very few people would bother to go through the bodily contortion required to read a screen whose back is turned towards them. An unscrupulous employee could simply get me to authenticate the finger print once again though there was no error and use the second authentication to allot a second SIM card in my name. He could then give me the first SIM card and hand over the second SIM to a terrorist. When that terrorist is finally caught, the SIM that he was using would be traced back to me and my life would be utterly and completely ruined.

Last week, the newspapers carried a PTI report on a case going on in the Delhi High Court about exactly this vulnerability:

The Delhi High Court on Thursday suggested incorporating recommendations, like using OTP authentication instead of biometric, given by two amicus curiae to plug a ‘loophole’ in the Aadhaar verification system that had been misused by a mobile shop owner to issue fresh SIM cards in the name of unwary customers for use in fraudulent activities. The shop owner, during Aadhaar verification of a SIM, used to make the customer give his thumb impression twice by saying it was not properly obtained the first time and the second round of authentication was then used to issue a fresh connection which was handed over to some third party, the high court had earlier noted while initiating a PIL on the issue.

This vindicates what I wrote last year:

Using Aadhaar (India’s biometric authentication system) to verify a person’s identity is relatively secure, but using it to authenticate a transaction is extremely problematic. Every other form of authentication is bound to a specific transaction: I sign a document, I put my thumb impression to a document, I digitally sign a document (or message as the cryptographers prefer to call it). In Aadhaar, I put my thumb (or other finger) on a finger print reading device, and not on the document that I am authenticating. How can anybody establish what I intended to authenticate, and what the service provider intended me to authenticate? Aadhaar authentication ignores the fundamental tenet of authentication that a transaction authentication must be inseparably bound to the document or transaction that it is authenticating. Therefore using Aadhaar to authenticate a transaction is like signing a blank sheet of paper on which the other party can write whatever it wants.

In the sister blog during December 2017 and January 2018

The following posts appeared on the sister blog (on Financial Markets and their Regulation) during December 2017 and January 2018.

Turning an Android phone into a travelling desktop

Installing the software in the phone

I covered this in a post a few months ago:

You can turn your phone into a miniature version of your laptop by installing a desktop Linux distribution inside your Android phone and then installing all your favourite open source software inside that.

In my case, the open source software running inside my phone includes:

This solution works quite well provided it is used sparingly (for example to make a small last minute change to a presentation). However, as one gets used to the power lurking inside the phone, one is tempted to do this more extensively, and the limitations of the phone’s tiny screen and clumsy virtual keyboard become very apparent. In this post, I talk about my attempts to overcome these limitations with the help of other gadgets and peripherals.

Turning the hotel TV into an external display

I find an external display to be a more pressing need than anything else – it is useful whether one is consuming content (for example, reading a pdf file with graphs, diagrams and equations) or creating content (for example, writing this blog post). The obvious solution to the tiny screen problem is to connect the phone to the large flat screen TV that is now present in virtually every hotel room. But implementing this idea proved non-trivial.

Many modern Android phones do not support the MHL or Slimport interfaces and so cannot provide an HDMI output from the USB port. However, almost all Android phones support casting to a TV using Google Chromecast, and so this was the solution that I adopted. Chromecast however has two serious limitations:

  • It needs an internet connection even when casting local content from the phone.
  • It does not connect to the portal based WiFi that is standard in most hotels (it does connect to standard password based WiFi networks used by home routers).

So the Chromecast needs to be supplemented by a portable WiFi router. I use the HooToo TripMate Nano, which can act as a WiFi bridge: it connects to one WiFi network (say the hotel WiFi) and makes that internet connection available over its own WiFi network. In a hotel room, I first power up the HooToo, connect my phone to the HooToo WiFi network, log in to the HooToo admin page and ask it to connect to the hotel WiFi network. The hotel’s login portal then comes up on my phone web browser and I sign in to it. Next, I connect my Chromecast to the HDMI port of the hotel TV and power it up. My Chromecast has been permanently set up to connect to the HooToo WiFi network and so it does so automatically. Now the phone and the Chromecast are connected to the same WiFi network (the HooToo WiFi), which in turn is connected to the internet through the hotel WiFi. The Chromecast now works perfectly, and I ask my phone to mirror/cast its screen to the Chromecast. Now my phone has a 42 inch (or bigger) display on which I can read anything that is on the phone.

Both the Chromecast and the HooToo need power, and I find it convenient to supply this power from a power bank that has two charging ports. I carry a power bank anyway as an extra power supply for my phone, and by using it I avoid carrying too many chargers/adaptors and hunting for power points (sockets) in the hotel. (When I am travelling outside the country, I carry only one adaptor plug, and so even if the hotel has lots of power sockets, I may have access to only one because my plugs do not fit these sockets without an adaptor.) This whole set (Chromecast, HooToo and power bank) is quite light and compact, and I have gotten used to carrying it with me whenever I travel.

External keyboard and mouse

Occasionally, I find that the external display is not enough. There are some trips during which I plan to do extensive typing on my phone, and then an external bluetooth keyboard and mouse become useful. Since they are bluetooth devices, they can be used with a wide range of phones, tablets and laptops, and not just an Android phone. They end up being used at home with one device or the other, but these are much bulkier peripherals and I carry them with me during my travel only when I anticipate heavy use. On these occasions (as in the photograph below), my mobile is effectively a desktop with a large screen, comfortable keyboard and mouse.

My phone connected to hotel TV and other peripherals

Why Intel investors should subscribe to the Linux Kernel Mailing List or at least LWN

On January 3 and 4, 2018 (Wednesday and Thursday), the Intel stock price dropped by about 5% amidst massive trading volumes after The Register revealed a major security vulnerability in Intel chips on Tuesday evening (the Meltdown and Spectre bugs were officially disclosed shortly thereafter). But a bombshell had landed on the Linux Kernel on Saturday, and a careful reader would have been able to short the stock when the market opened on Tuesday (after the extended weekend). So, -1 for semi-strong form market efficiency.

Saturday’s post on LWN was very cryptic:

Linus has merged the kernel page-table isolation patch set into the mainline just ahead of the 4.15-rc6 release. This is a fundamental change that was added quite late in the development cycle; it seems a fair guess that 4.15 will have to go to -rc8, at least, before it’s ready for release.

The reason this was a bombshell is that rc6 (release candidate 6) comes very late in the release cycle, when only minor bug fixes are usually made before the final release as version 4.15. Only 10 days earlier, an article on LWN stated that the Kernel Page-Table Isolation (KPTI) patch would be merged only into version 4.16, and even that was regarded as rushed. The article observed that many of the core kernel developers had clearly put a lot of time into this work and concluded that:

KPTI, in other words, has all the markings of a security patch being readied under pressure from a deadline.

If merging into 4.16 looked like racing against a deadline, pushing it into 4.15 clearly indicated an emergency. The public still did not know what bug KPTI was guarding against, because security researchers follow a policy of responsible disclosure, under which public disclosure is delayed during an embargo period that gives the key developers (who are informed in advance) time to patch their software. But clearly, the bug must have been really scary for the core developers to merge the patch into the kernel in such a tearing hurry.

One more critical piece of information had landed on LWN two days before the bombshell. On December 27, a post described a small change that had been made in the KPTI patch:

AMD processors are not subject to the types of attacks that the kernel page table isolation feature protects against. The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault.

Disable page table isolation by default on AMD processors by not setting the X86_BUG_CPU_INSECURE feature, which controls whether X86_FEATURE_PTI is set.

As Linus Torvalds put it a few days later: “not all CPU’s are crap.” Since it was already known that KPTI would degrade the performance of the processor by about 5%, the implication was clear: Intel chips would slow down by 5% relative to AMD after KPTI. In fact, one post on LWN on Monday evening (Note that Jan 2, 2018 0:00 UTC (Tue) would actually be late Monday evening in New York) did mention that trade idea:

Posted Jan 2, 2018 0:00 UTC (Tue) by Felix_the_Mac (guest, #32242)
In reply to: Kernel page-table isolation merged by GhePeU
Parent article: Kernel page-table isolation merged
I guess now would be a good time to buy AMD stock

The stock price chart shows that AMD did start rising on Tuesday, though the big volumes came only on Wednesday and Thursday. The interesting question is: why was the smart money not reading the Linux Kernel Mailing List (or at least LWN) and getting ready for the short-Intel, long-AMD trade? Were they still recovering from the hangover of the New Year party?

Peripheral vision and non-Euclidean geometry

I came across a recent paper by Google researchers, “Introducing a New Foveation Pipeline for Virtual/Mixed Reality”:

In the human visual system, the fovea centralis allows us to see at high-fidelity in the center of our vision, allowing our brain to pay less attention to things in our peripheral vision. Foveated rendering takes advantage of this characteristic to improve the performance of the rendering engine by reducing the spatial or bit-depth resolution of objects in our peripheral vision. To make this work, the location of the High Acuity (HA) region needs to be updated with eye-tracking to align with eye saccades, which preserves the perception of a constant high-resolution across the field of view.

This reminded me of a paper (“Computer graphics, peripheral vision and non-Euclidian geometry.” Computers & Graphics 16.3 (1992): 253-258) that I wrote 25 years ago which was also based on the distinction between foveal and peripheral vision. That paper was not about virtual reality, but about small computer screens.

Computer graphics is often confronted with the task of providing the viewer with a visual picture of some object which is too large to fit on a computer screen unless the image is scaled down so drastically that much of the detail is lost. The viewer is then asked to work with a partial view of the object, and use a keyboard or a mouse to (a) scroll this image horizontally or vertically, or (b) zoom in or out, or (c) rotate the object.

The computer screen … uses clipping to implement what one might call a “cookie cutter” vision – a small portion of the “cookie” is neatly cut out and given to us. The screen is treated as a window to the “world” – everything visible from this window is displayed at the same resolution, and what is outside is simply cut out.

In the human eye, we find a gradual loss of visual clarity as we move away from the fovea to the periphery; we do not find an abrupt loss of vision at some point. … while concentrating on a small part of the field of vision [the human eye] still retains a hazy view of the peripheral region preventing it from losing sight of the total picture.

This paper argues that the lack of a similar peripheral vision is a major deficiency in computer graphics today. It then goes on to develop a mapping technique which tries to simulate this peripheral vision, and thereby make computer graphics more powerful and versatile. … The suggested mapping is closely related to non Euclidian geometry …

This, to my mind, is a very important insight, because experimental psychologists established over fifty years ago that the perceptual geometry of human vision is in fact strongly non-Euclidean – specifically, hyperbolic [see, for example, Blank, A.A., “The Luneberg Theory of Binocular Space Perception”, in Koch, S. (ed), Psychology: A Study of a Science, Vol 1, New York, McGraw Hill (1959)]. This experimental evidence is at first quite surprising and inexplicable.

Living as we do in a Euclidean world (the relativistic non-Euclidean nature of the world is negligible for our purposes), why do we have non-Euclidean vision, and how do we find our way about in the world? Peripheral vision suggests an answer to both questions. We find our way about by relying on our foveal vision, which is Euclidean (hyperbolic geometry is locally Euclidean); we never trust our peripheral (non-Euclidean) vision for that. We have hyperbolic vision in a Euclidean world because that is the way to accommodate peripheral vision, which is more important for human survival than the niceties of Euclidean geometry.

Computing power has progressed far enough for these variable scaling techniques to be done in real time for videos (and not just the still images that I had in mind a quarter century ago). I wish these techniques would come into widespread use. Whenever I am navigating with Google Maps, I find it frustrating that if I zoom in to see the turnings and intersections, I lose the big picture of where I am in the overall route. Non-Euclidean mappings would allow me to zoom in on an intersection while still seeing the big picture (hazily).
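To illustrate the kind of variable scaling I have in mind, here is a minimal python sketch of a fisheye-style radial mapping; the specific formula below is a standard graphical-fisheye compression chosen for illustration, not the exact mapping from my 1992 paper:

```python
import math

# Fisheye-style radial mapping: points near the focus keep (magnified)
# detail, while distant points are compressed so the whole "world" stays
# on screen, hazily, at the periphery.
def fisheye(x, y, fx=0.0, fy=0.0, d=3.0):
    """Map (x, y) towards the focus (fx, fy); d controls the distortion."""
    dx, dy = x - fx, y - fy
    r = math.hypot(dx, dy)
    if r == 0:
        return (fx, fy)
    # Monotone compression r' = (d + 1) r / (d r + 1):
    # near r = 0 the scale is about (d + 1); as r grows, r' -> (d + 1) / d,
    # so the entire plane maps into a bounded disc around the focus.
    rp = (d + 1) * r / (d * r + 1)
    return (fx + dx * rp / r, fy + dy * rp / r)
```

Applied to a map display, the focus would sit on the intersection being examined: nearby streets are rendered at full detail, while the rest of the route is squeezed, but never clipped, towards the edge of the screen.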

In the sister blog during August-November 2017

The following posts appeared on the sister blog (on Financial Markets and their Regulation) during August-November 2017.