Predicting human behaviour is legal, predicting machines is not?

I read this Wired story about some hackers being sent to jail for “hacking” slot machines in US casinos. “Hacking” is probably the wrong word to use for this: they made money by predicting what the slot machine would do by observing it carefully, and using their knowledge of the insecure random number generator used in the software of the slot machines. It appears therefore that it is illegal to predict what a machine would do by figuring out its vulnerabilities and observing its behaviour.

The irony of the matter is that the entire business model of the casinos is built on figuring out the vulnerabilities of the human customers, predicting how they would bet under different situations and designing every minute detail of the casino to exploit these vulnerabilities. The New Yorker had a story five years ago about how a casino was redesigned completely when the customer profile changed from predominantly older male customers to more women:

So Thomas redesigned the room. He created a wall of windows to flood the slot machines with natural light. He threw out the old furniture, replacing it with a palette that he called “garden conservatory” … There are Italian marbles … Bowls of floating orchids are set on tables; stone mosaics frame the walkway; the ceiling is a quilt of gold mirrors. Thomas even bought a collection of antique lotus-flower sculptures

Casinos “monitor the earnings of the gaming machines and tables. If a space isn’t bringing in the expected revenue, then Thomas is often put to work.” The design is optimized using a massive amount of research which can justifiably be called “hacking” the human brain. If you look at the Google Scholar search results for the papers of just one top academic (Karen Finlay) in the field of casino design, you will see that she has studied every conceivable design element to determine what can cause people to bet more:

  • A comparison of ambient casino sound and music: Effects on dissociation and on perceptions of elapsed time while playing slot machines
  • Casino decor effects on gambling emotions and intentions
  • Assessing the contribution of gambling venue design elements to problem gambling behaviour
  • The Influence of Casino Architecture and Structure on Problem Gambling Behaviour
  • Measuring the Effects of Pictorial and Text Messages on Memory and Gambling Intentions Within a Casino Environment
  • The Effect of Visual Stimuli in Casinos on Emotional Responses and Problem Gambling Behavior
  • The Effect of Match and Mismatch Between Trait and State Emotion on At-Risk Gambling
  • Effects of slot machine characteristics on problem gambling behaviour

The more recent studies on human behaviour are done using a panoscope which:

features networked immersive displays where individuals are absorbed in an environment (12 feet in diameter) that surrounds them on a 360-degree basis. … Use of these panels creates a totally immersive life-like experience and facilitates the delivery of these manipulations. (Finlay-Gough, Karen, et al. “The Influence of Casino Architecture and Structure on Problem Gambling Behaviour: An Examination Using Virtual Reality Technology.” ECRM2015-Proceedings of the 14th European Conference on Research Methods 2015: ECRM 2015. Academic Conferences Limited, 2015.)

I do not see how this kind of attempt to fathom the workings of the human mind is much different from the hackers buying scrapped slot machines and figuring out how they work.

The better way to think about what is going on is to view it as a bad case of regulatory capture. The Wired story says that “Government regulators, such as the Missouri Gaming Commission, vet the integrity of each algorithm before casinos can deploy it.” The sensible response would be for the regulators to decertify these algorithms, since their random number generators are not secure, and to force the casinos to use cryptographically secure random number generators. The casinos do not want to spend the money to change these slot machines, and the captured regulators let them keep running these machines, while taxpayer money is expended chasing the hackers.
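
To see why the gap between a merely certified generator and a cryptographically secure one matters, here is a minimal Python sketch (purely illustrative; the constants are generic textbook values, not taken from any real slot machine): a simple linear congruential generator can be cloned and predicted exactly once an observer has seen its output, while the `secrets` module draws from the operating system's CSPRNG and offers no such foothold.

```python
import secrets

# A toy linear congruential generator (LCG) of the kind older machines used;
# the constants are illustrative, not from any real slot machine.
class LCG:
    def __init__(self, seed):
        self.state = seed

    def next(self):
        self.state = (1103515245 * self.state + 12345) % 2**31
        return self.state

machine = LCG(seed=20170207)
observed = [machine.next() for _ in range(3)]   # what a patient observer records

# Because each output *is* the internal state, one observation is enough
# to clone the generator and predict every future "spin".
clone = LCG(seed=observed[-1])
assert [machine.next() for _ in range(5)] == [clone.next() for _ in range(5)]

# A cryptographically secure generator gives the observer no such advantage:
# past outputs do not reveal the internal state.
secure_spin = secrets.randbelow(2**31)
```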

Perhaps we should be less worried about what the hackers have done than about what the casinos are doing. Unlike the vulnerabilities in the slot machines, the vulnerabilities in the human brain cannot be fixed by a software update. Yet hacking the human brain is apparently completely legal, and it is not only the casinos that are doing this. Probably half of the finance industry is based on the same principles.

Setting up a Raspberry Pi as a home file server

Since tastes and needs differ widely within our family, the laptops and devices at my home run Linux, Windows 10, Android and iOS. Sharing data between devices running such diverse operating systems is a huge pain. We often end up using email, Dropbox or Google Drive, though it is utterly silly to move a file halfway around the world to share it with somebody within arm’s reach.

This is where the Raspberry Pi comes in. It is cheap enough to be within the budget of almost anybody who runs a WiFi network at home, and it is perfectly capable of running a file server accessible from Windows, Linux, Android and iOS. This post describes how I set it up for this purpose using Arch Linux. The Raspberry Pi (RPi for short) can run many different operating systems, but I chose Arch Linux ARM because it is quite powerful and also because I run Arch Linux on my laptop.

Prerequisites

  • RPi starter kit. At a minimum, we need:
    • Raspberry Pi 3 (If you use a RPi2, you will need a USB WiFi adapter)
    • Micro USB Power Supply
    • 16 GB SD card (8 GB might be adequate) and
    • LAN cable.

      Neither a keyboard nor a display is needed as the RPi will run headless.

  • An external USB hard disk of adequate capacity (say 1 TB or more) with an independent power supply. Alternatively, a powered USB hub can be used to connect a USB hard disk without an independent power supply.
  • A WiFi router (administrative rights are needed)
  • A computer running Linux to set up the SD card
  • Optionally, a music system and an unused mobile phone to create a networked music system.
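
The full walkthrough follows, but the heart of the setup is a file-sharing service that all four operating systems understand, and Samba (SMB) is the natural choice since Windows, Linux, Android and iOS file managers all speak it. The share definition below is only a simplified sketch with a hypothetical mount point and user name, not necessarily the exact configuration described in the rest of the post.

```
# /etc/samba/smb.conf -- simplified sketch; paths and names are hypothetical
[global]
   workgroup = WORKGROUP
   server string = RPi file server
   security = user

[shared]
   comment = Family file share on the USB disk
   path = /mnt/usbdisk/shared
   browseable = yes
   read only = no
   valid users = family
```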

Continue Reading

The blockchain as an ERP for a whole industry

In the eight years since Satoshi Nakamoto created Bitcoin, there has been a lot of interest in applying the underlying technology, the blockchain, to other problems in finance. The blockchain, or Distributed Ledger Technology (DLT) as it is often called, brings benefits like Byzantine fault tolerance, disintermediation of trusted third parties and resilience to cyber threats.

Gradually, however, the technology has moved from the geeks to the suits. In the crypto-currency world itself, this evolution is evident: Bitcoin was and is highly geek heavy; Ethereum is an (unstable?) balance of geeks and suits; Ripple is quite suit heavy. History suggests that the suits will ultimately succeed in repurposing any technology to serve establishment needs, however anarchist its original goals might have been. One establishment need that the blockchain can serve very well is the growing need for an industry-wide ERP.

ERP (enterprise resource planning) software tries to integrate the management of all major business processes in an enterprise. At its core is a common database that provides a single version of the truth in real time throughout the organization cutting across departmental boundaries. The ERP system uses a DBMS (database management system) to manage this single version of the truth. The blockchain is very similar: it is a real time common database that provides a single version of the truth to all participants in an industry cutting across organizational boundaries.
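
To make the “single version of the truth” analogy concrete, here is a toy Python sketch of the data structure underneath a blockchain: an append-only ledger in which every block commits to the hash of its predecessor, so that any participant can independently detect tampering with the shared history. This is only an illustration of the idea, not a sketch of any production DLT platform.

```python
import hashlib
import json

def block_hash(block):
    # Hash a block's contents in a canonical form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    # Each new block commits to the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    # Every participant can check the shared history independently.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append_block(ledger, [{"from": "marketing", "to": "accounting", "invoice": 42}])
append_block(ledger, [{"from": "accounting", "to": "treasury", "payment": 42}])
assert verify(ledger)

ledger[0]["transactions"][0]["invoice"] = 43   # tamper with the "truth"
assert not verify(ledger)
```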

To understand why and how the blockchain may gain adoption, it is therefore useful to understand why many large organizations end up adopting an ERP system despite its high cost and complexity. The ERP typically replaces a bunch of much cheaper department-level software, and my guess is that an ERP deployment would struggle to meet an ROI (return on investment) criterion because of the huge investment of effort, money and top management time it demands. The logical question is: why not harmonize the pre-existing pieces of software instead? For example, if marketing is using invoicing software and accounting needs this data to account for the sales, all that is really needed is for the accounting software to accept data from the marketing software and use it.

The reason this solution does not work boils down to organizational politics. In the first place, the accounting and marketing departments do not typically trust each other. Second, marketing would insist on providing the data in their preferred format and argue that accounting can surely read this and convert it into their internal format. Accounting would of course argue that marketing should instead give the data in the accountant’s preferred format, which is so obviously superior. Faced with the task of arbitrating between them, the natural response of top management is to adopt a “plague on both houses” solution and ask both departments to scrap their existing software and adopt a new ERP system.

It is easy to see this dynamic playing out with the blockchain as well. There is a need for a single version of the truth across all organizations involved in many complex processes. Clearly, organizations do not trust each other and no organization would like to accept the formats, standards and processes of another organization. It is a lot easier for everybody to adopt a neutral solution like the blockchain.

A key insight from this analysis is that for widespread adoption of blockchain to happen, it is not at all necessary that the blockchain be cheaper, faster or more efficient. It will not be subjected to an ROI test, but will be justified on strategic grounds like resilience to cyber threats and Byzantine actors.

The only thing that worries me is that the suits are now increasingly in charge, and cryptography is genuinely hard. As Arnold Kling says: “Suits with low geek quotients are dangerous”.

In the sister blog during September-December 2016

The following posts appeared on the sister blog (on Financial Markets and their Regulation) during September-December 2016.

Moving to a tiling window manager

Over two blog posts, I have described my journey towards minimalism – from Windows to Ubuntu at the beginning of the decade, then to Xubuntu over two years back, and onward to Arch Linux a year ago. While moving to Arch Linux, I abandoned desktop environments in favour of a relatively minimalist window manager (Openbox). I chose Openbox because I wanted something minimalist, but was not ready for a tiling window manager.

A couple of months back, I decided to explore tiling window managers, which give much greater control over the placement of windows and are also more keyboard friendly. Many years ago, when I was dissatisfied with how often stacking window managers place new windows at arbitrary locations and size them inappropriately, I had discovered wmctrl, a command line tool to activate, close, move, resize, maximize and minimize windows. By assigning hotkeys to suitable wmctrl commands, I could very quickly move windows to desired locations and sizes (for example, full height, flush left and filling two-thirds of the screen). Most often, I operated with a couple of windows that fully covered the screen. Slowly, I realized that I was using wmctrl to turn a stacking window manager into a tiling one, and so it might make more sense to use a tiling window manager in the first place.
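
As an illustration, these are the kind of wmctrl invocations that can be bound to hotkeys; the geometry numbers are for a hypothetical 1920x1080 screen, not my exact bindings.

```
# Un-maximize the active window, then make it full height,
# flush left and roughly two-thirds of the screen wide
# (-e takes gravity,x,y,width,height).
wmctrl -r :ACTIVE: -b remove,maximized_vert,maximized_horz
wmctrl -r :ACTIVE: -e 0,0,0,1280,1040

# Maximize the active window to cover the whole screen.
wmctrl -r :ACTIVE: -b add,maximized_vert,maximized_horz
```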

I began my exploration of tiling window managers with awesome, but its out-of-the-box behaviour did not suit me at all, and so I decided to try i3. Trying these two in quick succession turned out to be a good idea because awesome and i3 have very different philosophies and default behaviours. By experiencing both of them, I could get a good sense of how to configure a tiling window manager to suit my needs. The decision to go back to awesome was driven by the perception that it could be customized more thoroughly by writing appropriate Lua code.

My customization of awesome included the following:
Continue Reading

OAuth2 authentication for offline email clients

More than a year ago, in my first post on this blog, I described my head in the cloud, feet on the ground strategy for offline email access. At that time, my solution (based on offlineimap) required me to store my email password in my Gnome Keyring. This is far from satisfactory because, as I explained in my blog post on using encryption, I do not like to keep important passwords in the Gnome Keyring. For those, I use a KeePass password file in an encrypted file system. This means that I need three passwords to get at my important passwords: first to log in to the computer, second to mount my encrypted volume and third to open the password manager. On the other hand, I need only my login password to get to the less important passwords sitting in the Gnome Keyring. In my view, email passwords are among the most important ones, and it is unfortunate that, to use offlineimap, I had to store something this critical in the Gnome Keyring.

The solution is to use OAuth2. In the last year or so, offlineimap has acquired the capability to use OAuth2, and now I have completed my migration to this method. As part of this process, I sat down and read the official document on OAuth 2.0 Threat Model and Security Considerations. That made me uncomfortable with the approach suggested in offlineimap (and in much other software as well) of storing the OAuth2 refresh token in plain text in the configuration file. It might be acceptable if the home partition is encrypted, but as I explained in my Using Encryption post, that is not how my laptop is set up. I therefore came up with the idea of storing the refresh token in the Gnome Keyring. Since it is possible to use arbitrary Python code for almost all settings in the offlineimap configuration file, this is easy.
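
As a sketch of what this looks like, a helper like the one below (using the python `keyring` package, with a hypothetical service and key name) can live in the file named by offlineimap's `pythonfile` setting, so that the configuration evaluates a function call instead of containing the refresh token in plain text. Exactly which option consumes the value depends on the offlineimap version, so treat the wiring as illustrative rather than as my precise configuration.

```python
# ~/.offlineimap.py -- referenced from the 'pythonfile' setting in .offlineimaprc
# The service and key names below are placeholders, not my actual setup.
import keyring

def get_refresh_token():
    # Look the token up in the Gnome Keyring via the keyring library.
    token = keyring.get_password("offlineimap-oauth2", "refresh_token")
    if token is None:
        raise RuntimeError("OAuth2 refresh token not found in the keyring")
    return token
```
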
Continue Reading

Migrating from R to Python

Many years ago, I shifted from Microsoft Excel and LibreOffice Calc to R data frames as my primary spreadsheet tool. This was one of the earliest steps in my ongoing move from bloat to minimalism (see my three blog posts on this process). Shifting to R yielded many benefits:

  • Greater readability and maintainability
  • Version control
  • Reusable code
  • Dynamic generation of reports and presentations from computed data using LaTeX and knitr
  • Production quality graphics and charts using plain R graphics and, more importantly, ggplot
  • Access to a comprehensive library of statistical and quantitative finance tools written in R

Over the last few months, I have been shifting from R to Python for most of my work. The primary reason for making this change is that Python is a full-fledged programming language, unlike R, which is primarily a statistical language that has been extended to do a lot of other things. A few years ago (when I first shifted to R), Python was totally unsuitable for use as a spreadsheet because the language was primarily designed to work with scalars rather than vectors and matrices. But in recent years, the Python tool set (NumPy, SciPy, pandas, matplotlib, statsmodels, scikit-learn) has developed rapidly and now goes beyond the capabilities of R in many respects. Jake VanderPlas’s keynote talk at the SciPy 2015 conference is an excellent introduction to this entire set of tools. Overall, I am very happy with the pandas implementation of data frames based on NumPy arrays; the best features of R have been preserved.
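
Here is a small sketch of what day-to-day “spreadsheet” work looks like in pandas, using made-up numbers and hypothetical column names:

```python
import pandas as pd

# A toy portfolio: the kind of table that used to live in a spreadsheet
# or an R data frame.
trades = pd.DataFrame({
    "ticker": ["INFY", "TCS", "INFY", "HDFC"],
    "quantity": [100, 50, 40, 25],
    "price": [950.0, 2300.0, 975.0, 1350.0],
})

# Vectorized column arithmetic, as in R.
trades["value"] = trades["quantity"] * trades["price"]

# Group-wise aggregation, much like aggregate() or dplyr in R.
by_ticker = trades.groupby("ticker")["value"].sum()
print(by_ticker)
```
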
Continue Reading