How to Cheat at the Lottery
(or, Massively Parallel Requirements Engineering)
Ross Anderson
University of Cambridge Computer Laboratory
New Museums Site, Pembroke Street, Cambridge CB2 3QG, UK
[email protected]
Abstract:
Collaborative software projects such as Linux and Apache have shown that a large, complex system can be built and maintained by many developers working in a highly parallel, relatively unstructured way. In this note, I report an experiment to see whether a high quality system specification can also be produced by a large number of people working in parallel with a minimum of communication.
(This paper appeared as an invited talk at the 1999 Computer Security Applications Conference. You can download PDFs of the two column version that appeared in the proceedings and, if you prefer larger type, the original single column version.)
Experienced software engineers know that perhaps 30% of the cost of a software product goes into specifying it, 10% into coding, and the remaining 60% on maintenance. This has profound effects on computer science. For example, when designing new programming languages the motive nowadays is mostly not to make coding easier, but to cut the costs of maintenance. There has also been massive interest in open source software products such as Linux and Apache, whose maintenance is undertaken by thousands of programmers working worldwide in a voluntary and cooperative way.
Open source software is not entirely a recent invention; in the early days of computing most system software vendors published their source code. This openness started to recede in the early 1980s when pressure of litigation led IBM to adopt an `object-code-only' policy for its mainframe software, despite bitter criticism from its user community. The pendulum now seems to be swinging back, with Linux and Apache gaining huge market share.
In his influential paper `The Cathedral and the Bazaar' [1], Eric Raymond compares the hierarchical organisation of large software projects in industry (`the cathedral') with the more open, unstructured approach of cooperative developers (`the bazaar'). He makes a number of telling observations about the efficiency of the latter, such as that `given enough eyeballs, all bugs are shallow'. His more recent paper, `The Magic Cauldron' [2], explores the economic incentives that for-profit publishers have found to publish their source code, and concludes that IBM's critics were right: where reliability is paramount, open source is best, as users will cooperate in finding and removing bugs.
There is a corollary to this argument, which I explore in this paper: the next priority after cutting the costs of maintenance should be cutting the costs of specification.
Specification is not only the second most expensive item in the system development life cycle, but is also where the most expensive things go wrong. The seminal study by Curtis, Krasner and Iscoe of large software project disasters found that failure to understand the requirements was mostly to blame [3]: a thin spread of application domain knowledge typically led to fluctuating and conflicting requirements, which in turn caused a breakdown in communication. They suggested that the solution was to find an `exceptional designer' with a deep understanding of the problem who would assume overall responsibility.
But there are many cases where an established expert is not available, such as when designing a new application from scratch or when building a competitor to a closed, proprietary system whose behaviour can only be observed at a distance.
There are also some particular domains in which specification is well known to be hard. Security is one example; the literature has many examples of systems which protected the wrong thing, or protected the right thing but using the wrong mechanisms. Most real life security failures result from the opportunistic exploitation of elementary design flaws rather than `high-tech' attacks such as cryptanalysis [4]. The list of possible attacks on a typical system is long, and people doing initial security designs are very likely to overlook some of them. Even in a closed environment, the use of multiple independent experts is recommended [5].
Security conspicuously satisfies the five tests which Raymond suggested would identify the products most likely to benefit from an open source approach [2]. It is based on common engineering knowledge rather than proprietary techniques; it is sensitive to failure; it needs peer review for verification; it is business critical; and its economics include strong network effects. Its own traditional wisdom, going back at least to Auguste Kerckhoffs in 1883, is that cryptographic systems should be designed in such a way that they are not compromised if the opponent learns the technique being used. In other words, the security should reside in the choice of key rather than in obscure design features [6].
It therefore seemed worthwhile to see if a high quality security specification could be designed in a highly parallel way, by getting a lot of different people to contribute drafts in the hope that most of the possible attacks would be considered in at least one of them.
The opportunity to test this idea was provided by the fact that I teach courses in cryptography and computer security to second and third year undergraduates at Cambridge. By the third year, students should be able to analyse a protection problem systematically by listing the threats, devising a security policy and then recommending mechanisms that will enforce it. (The syllabus and lecture notes are available online at [7].)
By a security policy, we mean a high level specification which sets out the threats to which a system is assumed to be exposed and the assurance properties which are to be provided in response. Like most specifications, it is a means of communication between the users (who understand the environment) and the system engineers (who will have to implement the encryption, access control, logging or other mechanisms). So it must be clearly comprehensible to both communities; it should also be concise.
The students see, as textbook examples of security policy:
- the Bell-LaPadula model, which is commonly used by governments to protect classified information and which states that information can only flow up the classification hierarchy, and never down. Thus a civil servant cleared to `Secret' can read files at `Secret' or below, but not `Top Secret', while a process running at `Secret' can write at the same level or above, but never down to `Unclassified' (a minimal sketch of these rules follows this list);
- the Clark-Wilson model, which provides a reasonably formal description of the double-entry bookkeeping systems used by large organisations to detect fraud by insiders;
- the Chinese Wall model, which models conflicts of interest in professional practice. Thus an advertising account executive who has worked on one bank's strategy will be prevented from seeing the files on any other banking client for a fixed period of time afterwards;
- the British Medical Association model, which describes how flows of personal health information must be restricted so as to respect the established ethical norms for patient privacy. Only people involved directly in a patient's care should be allowed to access their medical records, unless the patient gives consent or the records are de-identified effectively.
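By way of illustration, the Bell-LaPadula rules above can be expressed in a few lines of code; the following Python sketch is mine, and the particular labels and their ordering are assumptions made for the example rather than any official classification scheme.

    # Minimal illustration of the Bell-LaPadula rules described above:
    # a subject may read at or below its own level ("no read up") and
    # write at or above it ("no write down"). Labels are illustrative.
    LEVELS = {"Unclassified": 0, "Secret": 1, "Top Secret": 2}

    def can_read(subject, obj):
        return LEVELS[subject] >= LEVELS[obj]

    def can_write(subject, obj):
        return LEVELS[subject] <= LEVELS[obj]

    # A 'Secret' subject can read 'Secret' but not 'Top Secret',
    # and can write 'Top Secret' but not 'Unclassified'.
    assert can_read("Secret", "Secret") and not can_read("Secret", "Top Secret")
    assert can_write("Secret", "Top Secret") and not can_write("Secret", "Unclassified")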
The first three of these are documented in [8] and the fourth in [9]. Further examples of security policy models are always welcome, as they help teach the lesson that `security' means radically different things in different applications. However, developing a security policy is usually hard work, involving extensive consultation with domain experts and successive refinement until a model emerges that is compact, concise and agreed by all parties.
Exceptions include designing a policy for a new application, and for a competitor to a closed system. In such cases, the best we can do may be to think long and hard, and hope that we will not miss anything important.
I therefore set the following exam question to my third year students:
You have been hired by a company which is bidding to take over the National Lottery when Camelot's franchise expires, and your responsibility is the security policy. State the security policy you would recommend and outline the mechanisms you would implement to enforce it.
For the benefit of overseas readers, I will now give a simplified description of our national lottery. (British readers can skip the next two paragraphs.)
The UK's national lottery is operated by a consortium of companies called Camelot, which holds a seven year licence from the government. This licence is up for renewal, which makes the question topical; and presumably Camelot will refuse to share its experience with potential competitors. A large number of franchised retail outlets sell tickets. The customer marks six out of 49 numbers on a form which he hands with his money to the operator; she passes it through a machine that scans it and prints a ticket containing the choice of numbers plus some further coded information to authenticate it.
Twice a week there is a draw on TV at which a machine selects seven numbered balls from 49 in a drum. The customers who have predicted the first six share a jackpot of several million pounds; the odds should be (49 choose 6), or 13,983,816 to one against, meaning that with much of the population playing there are several winners in a typical draw. (Occasionally there are no winners and the jackpot is `rolled over' to the next draw, giving a pot of many millions of pounds which whips the popular press to a frenzy.) There are also smaller cash prizes for people who guessed only some of the numbers. Half the takings go on prize money; the other half gets shared between Camelot, the taxman and various charitable good causes.
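For readers who want to check the arithmetic, the quoted figure is simply the number of ways of choosing six numbers from 49 (a Python check, for illustration only):

    # The jackpot odds quoted above are simply 49-choose-6.
    from math import comb
    print(comb(49, 6))   # 13983816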
The model answer I had prepared had a primary threat model that attackers, possibly in cahoots with insiders, would try to place bets once the result of the draw is known, whether by altering bet records or forging tickets. The secondary threats were that bets would be placed that had not been paid for, and that attackers might operate bogus vending stations which would pay small claims but disappear if a client won a big prize.
The security policy that follows logically from this is that bets should be registered online with a server which is secured prior to the draw, both against tampering and against the extraction of sufficient information to forge a winning ticket; that there should be credit limits for genuine vendors; and that there should be ways of identifying bogus vendors. Once the security policy has been developed in enough detail, designing enforcement mechanisms should not be too hard for someone skilled in the art - though there are some subtleties, as we shall see below.
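By way of illustration only, the following Python sketch shows one way the registration and authenticator requirements might be realised, with the printed code computed as a MAC under a key held by the secured server; the field layout, key handling and function names are my assumptions for the example, not a description of any real lottery system.

    # Illustrative sketch only: a registered bet gets a printed
    # authenticator computed as an HMAC under a key held by the secured
    # server, so a winning ticket cannot be forged without that key.
    # Field choices and key handling are assumptions for the example.
    import hmac, hashlib

    SERVER_KEY = b"demo key held only by the secured server"

    def register_bet(draw_id, terminal_id, numbers):
        record = "|".join([draw_id, terminal_id] + [str(n) for n in sorted(numbers)])
        # A real system would also log the record in an append-only store
        # before the draw; here we only derive the printed authenticator.
        return hmac.new(SERVER_KEY, record.encode(), hashlib.sha256).hexdigest()[:16]

    def verify_claim(draw_id, terminal_id, numbers, code):
        return hmac.compare_digest(register_bet(draw_id, terminal_id, numbers), code)

    code = register_bet("draw-123", "terminal-42", [3, 11, 17, 24, 38, 49])
    assert verify_claim("draw-123", "terminal-42", [3, 11, 17, 24, 38, 49], code)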
The exam was set on the first of June 1999 [10], and when the scripts were delivered that evening, I was eager to find out what the students might have come up with.
Thirty four candidates answered the question, and five of their papers were good enough to be kept as model answers. All of these candidates had original ideas which are incorporated in this paper, as did a further seven candidates whose answers were less complete. As the exam marking is anonymous, the `co-authors' of this specification are a subset of the candidates listed in the acknowledgements below. The question was a `good' one in that it divided the students up about equally into first, second and third class ranges of marks. Almost all the original ideas came from the first class candidates.
The contributions came at a number of levels, including policy goal statements, discussions of particular attacks, and arguments about the merits of particular protection mechanisms.
Policy goal statements
On sorting out the high level policy statements from the more detailed contributions, the first thing to catch the eye was a conflict reminiscent of the old debate over who should pay when a `phantom withdrawal' happens via an automatic teller machine - the customer or the bank [4].
One of the candidates assumed that the customer's rights must have precedence: `All winning tickets must be redeemable! So failures must not allow unregistered tickets to be printed.' Another candidate assumed the contrary, and thus that the `worst outcome should be that the jackpot gets paid to the wrong person, never twice.' Ultimately, whether systems fail in the shop's favour or the customer's is a regulatory issue. However, there are consequences for security. In the context of cash machine disputes, it was noted that if the customer carries the risk of fraud while only the bank is in a position to improve the security measures, then the bank may get more and more careless until an epidemic of fraud takes place. We presumably want to avoid this kind of `moral hazard' in a national lottery; perhaps the solution is for disputed sums to be added back to the prize fund, or distributed to the `good causes'.
As well as protecting the system from fraud, the operator must also convince the gaming public of this. This was expressed in various ways: `take care how you justify your operations'; `don't forget the indirect costs of security failure such as TV contract penalties, ticket refunds, and publicity of failure leading to bogus claims'; `at all costs ensure that there is enough backup to prevent unverifiable ticket problems.' The operator can get some protection by signs such as `no winnings due unless entry logged', but this cover is never total.
Next, a number of candidates argued that it was foolish to place sole reliance on any single protection mechanism, or any single instance of a particular type of mechanism. A typical statement was: `Don't bet the farm on tamper-resistance'. For example, if the main threat is someone forging a winning ticket after tapping the network which the central server uses to send ticket authenticator codes to vending machines, we might not just encrypt the line but also delay paying jackpots for several days to give all winners a chance to claim. (Simply encrypting the authentication codes would not be enough, if a technician who dismantled the encryption device at the server could get both the authentication keys and the encryption keys.) Translated into methodology, this suggests a security matrix approach which maps the threats to the protection mechanisms, and makes it easy for us to check that at least two independent mechanisms constrain every serious threat.
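A minimal sketch of such a matrix check follows; the threats and mechanisms listed are examples only, not a complete analysis.

    # Illustrative security-matrix check: map each serious threat to the
    # mechanisms constraining it, and flag any threat that is not covered
    # by at least two independent mechanisms. Entries are examples only.
    matrix = {
        "bet placed after draw": {"online registration before draw", "append-only audit log"},
        "winning ticket forged": {"line encryption", "delayed jackpot payout"},
        "bogus vending station": {"vendor credit limits"},
    }

    for threat, mechanisms in matrix.items():
        if len(mechanisms) < 2:
            print("WARNING: single point of failure for threat:", threat)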
Various attempts were made to reuse existing security policies, and particularly Clark-Wilson. These were mostly by weak candidates and not very convincing. But three candidates did get some mileage; for example, one can model the lottery terminal as a device that turns an unconstrained data item (the customer selection) into a constrained data item (the valid lottery ticket) by registering it and printing an authentication code on it. Such concepts can be useful in designing separation-of-duty mechanisms for ticket redemption and general financial control, but do not seem to be enough to cover all the novel and interesting security problems which a lottery provides.
Some candidates wondered whether a new franchisee would want to extend the existing lottery's business model, such as by allowing people to buy tickets over the phone or the net. In that case, one should try to design the policy to be extensible to non-material sales channels. (Internet based lottery ticket sales have since been declared to be a good thing by the government [11].)
Finally, some attention needs to be paid to protecting genuine winners. The obvious issue is safeguarding the privacy of winners who refuse publicity; less obvious issues include the risk that winners might be traced, robbed and perhaps even murdered during the claim process. For example, the UK has some recent history of telephone technicians abusing their access to win airline tickets and other prizes offered during phone-in competitions; one might be concerned about the risk that a technician, in cahoots with organised crime, would divert the winners' hotline, intercept a jackpot claim, and dispatch a hit squad to collect the ticket. In practice, measures to control this risk are likely to involve the phone company as much as the lottery itself.
Discussions of particular attacks
This leads to a discussion of attacks. There were several views on how the threat model should be organised; one succinct statement was: `Any attack that can be done by an outsider can be done at least as well by an insider. So concentrate on insider attacks'. This is something that almost everyone knows, but which many system designers disregard in practice. Other candidates pointed out that no system can defend itself against being owned by a corrupt organisation, and that senior insiders should be watched with particular care.
Moving now to the more technical analysis, a number of interesting attack scenarios were explored.
There are some secondary design concerns here. How will the machines validate the lower-value tickets that are paid out locally - only online? Or will some of the authenticator code be kept in the vending station? But in that case, how do we cope with the accidental or malicious destruction of the machine that sold a jackpot winning ticket, and how do we pay small winnings when the machine that sold the ticket is offline?
Reasoning about particular protection mechanisms
The third type of contribution from the candidates can be roughly classed as reasoning about particular mechanisms.
- what will be the controls on adding vending machines to the network (and for that matter adding servers);
- how long should logs be kept;
- how to deal with refunded tickets;
- how to deal with tickets that are registered but not printed (these will exist if you insist that unregistered tickets are never printed);
- what system will be used to transfer takings from merchants to the operator (we don't want a fake server to be able to collect real money);
- what audit requirements the taxman will impose;
- what sort of `intrusion detection' or statistical monitoring system will be incorporated to catch the bugs and/or attacks that we forgot about or which crept in during the implementation. For example, we might have a weird bug which enables a shopkeeper to manufacture occasional medium-sized winners which he credits against his account. If this is significant, it should turn up in long term statistical analysis (a minimal sketch of such a check follows this list).
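Here is a minimal sketch of the statistical check mentioned in the last item; the expected prize rate and the alarm threshold are assumptions chosen purely for illustration.

    # Flag retailers whose rate of medium-sized prize claims is far above
    # what their ticket sales would predict. The expected rate and the
    # alarm factor are assumptions chosen purely for illustration.
    def flag_suspicious(retailers, expected_rate=0.001, factor=5.0):
        """retailers: iterable of (retailer_id, tickets_sold, medium_prizes_claimed)."""
        suspicious = []
        for rid, sold, claimed in retailers:
            expected = sold * expected_rate
            if expected > 0 and claimed > factor * expected:
                suspicious.append((rid, claimed, expected))
        return suspicious

    print(flag_suspicious([("shop-17", 20000, 160), ("shop-42", 20000, 23)]))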
As we work through these details, it becomes clear that for most of the system, `Trusted' means not just tamper resistant but subject to approved audit and batch control mechanisms.
How complete are the above lists?
At the time I set the exam question, I had never played the lottery. I did not perform this experiment until after marking the exam scripts; this helped ensure a level playing field for the candidates. In fact, by the time I got round to buying a ticket, I had already written the first draft of this article and circulated it to colleagues. My description of the ticket purchase process in that draft had been based on casual observation of people ahead of me in Post Office queues, and was wrong in an unimportant but noticeable detail: I had assumed that the authentication code was printed on the form filled in by the customer, whereas in fact it appears on the receipt (which I have therefore called `the ticket' in this version of the paper). None of my colleagues noticed, and none of them has since admitted to having ever played. Indeed, only one of the candidates shows any sign of having done so. I had expected a negative correlation between education and lottery participation (many churches already denounce the lottery as a regressive tax on the poor, the weak and the less educated), but the strength of this correlation surprised me.
So the above security analysis was done essentially blind - that is, without looking at the existing system. Subsequent observation of the procedures actually implemented by Camelot suggests only two further issues.
- Secondly, the tickets are numbered as suggested in 4.3.4, but printed on continuous stock. The selected bet numbers and authentication codes are printed on the front, while pre-printed serial numbers appear on the back. This may have both advantages and disadvantages. If a standard retail receipt printer is used, it can produce a paper audit roll with a copy of all tickets printed. This may well be more convincing to a judge than any cryptographic protection for electronic logs. On the other hand, the audit roll might facilitate ticket forgery as in 4.2.7, and there may be synchronisation problems (the sample ticket I purchased has two successive serial numbers on the back). When synchronising tickets with serial numbers, one will have to consider everything from ticket refunds to how operators will initialise a new roll of paper in the ticket printer, and what sort of mistakes they will make.
The final drafting of the threat model, security policy and detailed functional design is now left as an exercise for the reader.
Linux and Apache prove that software maintenance can be done in parallel; the experiment reported in this paper shows that requirements engineering can too.
There has been collaborative specification development before, as with the `set-discuss' mailing list used to gather feedback during the development of the SET protocol for electronic payments. However, such mechanisms tend to have been rather ad hoc, and limited to debugging a specification that was substantially completed in advance by a single team. The contribution of this paper is twofold: to show that it is possible to parallelise right from the start of the exercise, and to illustrate how much value one can add in a remarkably short period of time. Our approach is a kind of structured brainstorming, and where a complete specification is required for a new kind of system to a very tight deadline, it looks unbeatable: it produced high quality input at every level, from policy through threat analysis to technical design detail.
The bottleneck is the labour required to edit the contributions into shape. In the case of this paper, the time I spent marking scripts, then rereading them, thinking about them and drafting the paper was about five working days. A system specification would usually need less polishing than a paper aimed at publication, but the time saved would have been spent on other activities such as doing a formal matrix analysis of threats and protection mechanisms, and finalising the functional design.
Finally, there is an interesting parallel with testing. It is known that different testers find the same bugs at different rates - even if Alice and Bob are equally productive on average, a bug that Alice finds after half an hour will only be spotted by Bob after several days, and vice versa. This is because different people have different areas of focus in the testing space. The consequence is that it is often cheaper to do testing in parallel rather than in series, as the average time spent finding each bug goes down [14]. The exercise reported in this paper strongly supports the notion that the same economics apply to requirements engineering too. Rather than paying a single consultant to think about a problem for twenty days, it will often be more efficient to pay fifteen consultants to think about it for a day each and then have an editor spend a week hammering their ideas into a single coherent document.
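Under the simplest (admittedly idealised) model, if each reviewer independently spots a given flaw with probability p, the chance that all n reviewers miss it is (1 - p)^n, which falls quickly with n; the value of p below is an assumption chosen only to illustrate the point.

    # Idealised illustration of the parallel-review argument: if each
    # reviewer independently spots a given flaw with probability p, the
    # chance that every one of n reviewers misses it is (1 - p) ** n.
    # The value p = 0.3 is an assumption chosen only for illustration.
    p = 0.3
    for n in (1, 5, 15):
        print(n, "reviewer(s): probability the flaw is missed =", round((1 - p) ** n, 4))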
I am grateful to the security group at Cambridge, and in particular to Frank Stajano, for a number of discussions. I also thank JR Rao of IBM for the history of the `object code only' effort, and Karen Spärck Jones, who highlighted those parts of the first draft that assumed too much knowledge of computer security for a general engineering audience, and also persuaded me to buy a ticket.
Finally, the students who contributed many of the ideas described here were an anonymous subset of our third year undergraduates for 1998-9, who were:
PP Adams, MSD Ashdown, JJ Askew, T Balopoulos, KE Bebbington, AR Beresford, TJ Blake, NJ Boultbee, DL Bowman, SE Boxall, G Briggs, AJ Brunning, JR Bulpin, B Chalmers, IW Chaudhry, MH Choi, I Clark, MR Cobley, DP Crowhurst, AES Curran, SP Davey, AJB Evans, MJ Fairhurst, JK Fawcett, KA Fraser, PS Gardiner, ADOF Gregorio, RG Hague, JD Hall, P Hari Ram, DA Harris, WF Harris, T Honohan, MT Huckvale, T Huynh, NJ Jacob, APC Jones, SR King, AM Krakauer, RC Lamb, RJP Lancaster, CK Lee, PR Lee, TY Leung, JC Lim, MS Lloyd, TH Lynn, BR Mansell, DH Mansell, AD McDonald, NG McDonnell, CJ McNulty, RD Merrifield, JT Nevins, TM Oinn, C Pat Fong, AJ Pearce, SW Plummer, C Reed, DJ Scott, AA Serjantov, RW Sharp, DJ Sheridan, MA Slyman, AB Swaine, RJ Taylor, ME Thorpe, BT Waine, MR Watkins, MJ Wharton, E Young, HJ Young, WR Younger, W Zhu.
Bibliography
Footnotes
- Appointing the members of the committees that dish out the money is a source of vast patronage for the Prime Minister and, according to cynics, is the real reason for the Lottery to exist.
- One of the companies that originally made up the Camelot consortium had to leave after its chief executive was found by the High Court to have tried to bribe a competing consortium during the bidding for the original lottery franchise.
- but see the section on the problems of redundancy
- For the benefit of readers without a security background, a MAC - or message authentication code - is a cryptographic checksum computed on data using a secret key and which can only be verified by principals who also possess that secret key. By comparison, a digital signature can in principle be verified by anybody. See [8] for more detail.
- the set of hardware, software and procedural components whose failure could lead to a compromise of the security policy
Ross Anderson