## Wine recognition dataset (Nearest-Neighbor Machine Learning Bakeoff)

Brief information from the UC Irvine Machine Learning Repository:

• Donated by Stefan Aeberhard
• Using chemical analysis to determine the origin of wines
• 13 attributes (all continuous), 3 classes, no missing values
• 178 instances
1. Title of Database: Wine recognition data
2. Sources:
(a) Forina, M. et al, PARVUS - An Extendible Package for Data
Exploration, Classification and Correlation. Institute of Pharmaceutical
and Food Analysis and Technologies, Via Brigata Salerno,
16147 Genoa, Italy.

(b) Stefan Aeberhard, email: stefan@coral.cs.jcu.edu.au
(c) July 1991
3. Past Usage:

(1)
S. Aeberhard, D. Coomans and O. de Vel,
Comparison of Classifiers in High Dimensional Settings,
Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
(Also submitted to Technometrics).

The data was used with many others for comparing various
classifiers. The classes are separable, though only RDA
has achieved 100% correct classification.
(RDA : 100%, QDA 99.4%, LDA 98.9%, 1NN 96.1% (z-transformed data))
(All results using the leave-one-out technique)

In a classification context, this is a well posed problem
with "well behaved" class structures. A good data set
for first testing of a new classifier, but not very
challenging.
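
The leave-one-out protocol behind these figures can be sketched as follows; the toy data below is made up for illustration (the actual experiments used the 13 wine attributes, z-transformed):

```java
// Illustrative sketch of leave-one-out evaluation of a 1-nearest-neighbor
// classifier, as in the results quoted above. The toy data is made up;
// the real experiments used the 13 z-transformed wine attributes.
public class LeaveOneOut1NN {

    // Euclidean distance between two feature vectors.
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            s += d * d;
        }
        return Math.sqrt(s);
    }

    // Classify the instance at index 'skip' by the label of its nearest
    // neighbor among all other instances (that one is left out).
    static int predict(double[][] X, int[] y, int skip) {
        int best = -1;
        double bestD = Double.POSITIVE_INFINITY;
        for (int i = 0; i < X.length; i++) {
            if (i == skip) continue;          // leave this instance out
            double d = dist(X[skip], X[i]);
            if (d < bestD) { bestD = d; best = i; }
        }
        return y[best];
    }

    // Fraction of instances whose left-out prediction matches their label.
    static double looAccuracy(double[][] X, int[] y) {
        int correct = 0;
        for (int i = 0; i < X.length; i++)
            if (predict(X, y, i) == y[i]) correct++;
        return (double) correct / X.length;
    }

    public static void main(String[] args) {
        double[][] X = {{0,0},{0,1},{1,0},{5,5},{5,6},{6,5}};
        int[] y = {1,1,1,2,2,2};
        System.out.println(looAccuracy(X, y));  // well-separated toy classes
    }
}
```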

(2)
S. Aeberhard, D. Coomans and O. de Vel,
"THE CLASSIFICATION PERFORMANCE OF RDA"
Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
(Also submitted to Journal of Chemometrics).

Here, the data was used to illustrate the superior performance of
a new appreciation function used with RDA.

4. Relevant Information:

-- These data are the results of a chemical analysis of
wines grown in the same region in Italy but derived from three
different cultivars.
The analysis determined the quantities of 13 constituents
found in each of the three types of wines.

-- I think that the initial data set had around 30 variables, but
for some reason I only have the 13 dimensional version.
I had a list of what the 30 or so variables were, but a.)
I lost it, and b.), I would not know which 13 variables
are included in the set.

5. Number of Instances

class 1 59
class 2 71
class 3 48

6. Number of Attributes

13

7. For Each Attribute:

All attributes are continuous

No statistics available, but we suggest standardising the
variables for certain uses (e.g. for use with classifiers
which are NOT scale invariant)

NOTE: 1st attribute is class identifier (1-3)
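
The suggested standardisation is the usual z-transform; a generic sketch (an illustration, not code distributed with the dataset):

```java
// Sketch of the z-transform suggested above: each column is rescaled to
// zero mean and unit variance, so that distance-based classifiers are not
// dominated by attributes with large numeric ranges (e.g. Proline).
public class Standardize {

    // Standardise each column of X in place: x' = (x - mean) / stddev.
    static void zTransform(double[][] X) {
        int n = X.length, d = X[0].length;
        for (int j = 0; j < d; j++) {
            double mean = 0;
            for (double[] row : X) mean += row[j];
            mean /= n;
            double var = 0;
            for (double[] row : X) var += (row[j] - mean) * (row[j] - mean);
            double sd = Math.sqrt(var / n);
            for (double[] row : X) row[j] = (row[j] - mean) / sd;
        }
    }

    public static void main(String[] args) {
        // Two toy columns on very different scales.
        double[][] X = {{12.0, 700}, {13.0, 1000}, {14.0, 1300}};
        zTransform(X);
        for (double[] row : X)
            System.out.println(row[0] + " " + row[1]);
    }
}
```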

8. Missing Attribute Values:

None

9. Class Distribution: number of instances per class

class 1 59
class 2 71
class 3 48

Further information from one of the authors of the wine database:

Date: Mon, 9 Mar 1998 15:44:07 GMT
From: riclea@crazy.anchem.unige.it (Riccardo Leardi)
Subject: wines

Dear Michael,
I'm one of the authors of PARVUS.
I saw your site and the reference to the data set wines.
Therefore, I think it could be interesting for you to know the names of the
variables (from the 27 original ones):

1) Alcohol
2) Malic acid
3) Ash
4) Alcalinity of ash
5) Magnesium
6) Total phenols
7) Flavanoids
8) Nonflavanoid phenols
9) Proanthocyanins
10) Color intensity
11) Hue
12) OD280/OD315 of diluted wines
13) Proline

If you are interested, I can send you by snail mail the original paper

Best regards
Riccardo Leardi

## BUPA liver disorders dataset (Nearest-Neighbor Machine Learning Bakeoff)

Brief information from the UC Irvine Machine Learning Repository:

• BUPA Medical Research Ltd. database donated by Richard S. Forsyth
• 7 numeric-valued attributes
• 345 instances (male patients)
• Includes cost data (donated by Peter Turney)
1. Title: BUPA liver disorders

2. Source information:
-- Creators: BUPA Medical Research Ltd.
-- Donor: Richard S. Forsyth
8 Grosvenor Avenue
Mapperley Park
Nottingham NG3 5DX
0602-621676
-- Date: 5/15/1990

3. Past usage:
-- None known other than what is shown in the PC/BEAGLE User's Guide
(written by Richard S. Forsyth).

4. Relevant information:
-- The first 5 variables are all blood tests which are thought
to be sensitive to liver disorders that might arise from
excessive alcohol consumption.  Each line in the bupa.data file
constitutes the record of a single male individual.
-- It appears that drinks>5 is some sort of a selector on this database.
See the PC/BEAGLE User's Guide for more information.

5. Number of instances: 345

6. Number of attributes: 7 overall

7. Attribute information:
1. mcv	mean corpuscular volume
2. alkphos	alkaline phosphatase
3. sgpt	alanine aminotransferase
4. sgot 	aspartate aminotransferase
5. gammagt	gamma-glutamyl transpeptidase
6. drinks	number of half-pint equivalents of alcoholic beverages
drunk per day
7. selector  field used to split data into two sets

8. Missing values: none
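
Reading a record out of bupa.data might look like the sketch below; the comma-separated layout and field order are assumptions based on the attribute list above, and the sample line is made up.

```java
// Sketch of pulling apart one line of bupa.data. Assumes the standard
// UCI comma-separated layout:
//   mcv,alkphos,sgpt,sgot,gammagt,drinks,selector
public class BupaSplit {

    // The selector field (the seventh), used to split the data into two sets.
    static int selector(String line) {
        return Integer.parseInt(line.split(",")[6].trim());
    }

    // The six blood-test / drinks measurements preceding the selector.
    static double[] measurements(String line) {
        String[] f = line.split(",");
        double[] m = new double[6];
        for (int i = 0; i < 6; i++) m[i] = Double.parseDouble(f[i]);
        return m;
    }

    public static void main(String[] args) {
        // A made-up record in the bupa.data format.
        String rec = "85,92,45,27,31,0.0,1";
        System.out.println(selector(rec) + " " + measurements(rec)[5]);
    }
}
```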

## breast-cancer-wisconsin dataset (Nearest-Neighbor Machine Learning Bakeoff)

Brief information from the UC Irvine Machine Learning Repository:

• Donated by Olvi Mangasarian
• Located in breast-cancer-wisconsin sub-directory, filenames root: breast-cancer-wisconsin
• Currently contains 699 instances
• 2 classes (malignant and benign)
• 9 integer-valued attributes

Citation Request:
This breast cancer database was obtained from the University of Wisconsin
Hospitals, Madison from Dr. William H. Wolberg.  If you publish results
when using this database, then please include this information in your
acknowledgements.  Also, please cite one or more of:

1. O. L. Mangasarian and W. H. Wolberg: "Cancer diagnosis via linear
programming", SIAM News, Volume 23, Number 5, September 1990, pp 1 & 18.

2. William H. Wolberg and O.L. Mangasarian: "Multisurface method of
pattern separation for medical diagnosis applied to breast cytology",
Proceedings of the National Academy of Sciences, U.S.A., Volume 87,
December 1990, pp 9193-9196.

3. O. L. Mangasarian, R. Setiono, and W.H. Wolberg: "Pattern recognition
via linear programming: Theory and application to medical diagnosis",
in: "Large-scale numerical optimization", Thomas F. Coleman and Yuying
Li, editors, SIAM Publications, Philadelphia 1990, pp 22-30.

4. K. P. Bennett & O. L. Mangasarian: "Robust linear programming
discrimination of two linearly inseparable sets", Optimization Methods
and Software 1, 1992, 23-34 (Gordon & Breach Science Publishers).

1. Title: Wisconsin Breast Cancer Database (January 8, 1991)

2. Sources:
-- Dr. William H. Wolberg (physician)
University of Wisconsin Hospitals
USA
-- Donor: Olvi Mangasarian (mangasarian@cs.wisc.edu)
Received by David W. Aha (aha@cs.jhu.edu)
-- Date: 15 July 1992

3. Past Usage:

Attributes 2 through 10 have been used to represent instances.
Each instance has one of 2 possible classes: benign or malignant.

1. Wolberg,~W.~H., \& Mangasarian,~O.~L. (1990). Multisurface method of
pattern separation for medical diagnosis applied to breast cytology. In
{\it Proceedings of the National Academy of Sciences}, {\it 87},
9193--9196.
-- Size of data set: only 369 instances (at that point in time)
-- Collected classification results: 1 trial only
-- Two pairs of parallel hyperplanes were found to be consistent with
50% of the data
-- Accuracy on remaining 50% of dataset: 93.5%
-- Three pairs of parallel hyperplanes were found to be consistent with
67% of data
-- Accuracy on remaining 33% of dataset: 95.9%

2. Zhang,~J. (1992). Selecting typical instances in instance-based
learning.  In {\it Proceedings of the Ninth International Machine
Learning Conference} (pp. 470--479).  Aberdeen, Scotland: Morgan
Kaufmann.
-- Size of data set: only 369 instances (at that point in time)
-- Applied 4 instance-based learning algorithms
-- Collected classification results averaged over 10 trials
-- Best accuracy result:
-- 1-nearest neighbor: 93.7%
-- trained on 200 instances, tested on the other 169
-- Also of interest:
-- Using only typical instances: 92.2% (storing only 23.1 instances)
-- trained on 200 instances, tested on the other 169

4. Relevant Information:

Samples arrive periodically as Dr. Wolberg reports his clinical cases.
The database therefore reflects this chronological grouping of the data.
This grouping information appears immediately below, having been removed
from the data itself:

Group 1: 367 instances (January 1989)
Group 2:  70 instances (October 1989)
Group 3:  31 instances (February 1990)
Group 4:  17 instances (April 1990)
Group 5:  48 instances (August 1990)
Group 6:  49 instances (Updated January 1991)
Group 7:  31 instances (June 1991)
Group 8:  86 instances (November 1991)
-----------------------------------------
Total:   699 points (as of the donated database on 15 July 1992)

Note that the results summarized above in Past Usage refer to a dataset
of size 369, while Group 1 has only 367 instances.  This is because it
originally contained 369 instances; 2 were removed.  The following
statements summarize the changes to the original Group 1 set of data:

#####  Group 1 : 367 points: 200B 167M (January 1989)
#####  Revised Jan 10, 1991: Replaced zero bare nuclei in 1080185 & 1187805
#####  Revised Nov 22,1991: Removed 765878,4,5,9,7,10,10,10,3,8,1 no record
#####                  : Removed 484201,2,7,8,8,4,3,10,3,4,1 zero epithelial
#####                  : Changed 0 to 1 in field 6 of sample 1219406
#####                  : Changed 0 to 1 in field 8 of following sample:
#####                  : 1182404,2,3,1,1,1,2,0,1,1,1

5. Number of Instances: 699 (as of 15 July 1992)

6. Number of Attributes: 10 plus the class attribute

7. Attribute Information: (class attribute has been moved to last column)

#  Attribute                     Domain
-- -----------------------------------------
1. Sample code number            id number
2. Clump Thickness               1 - 10
3. Uniformity of Cell Size       1 - 10
4. Uniformity of Cell Shape      1 - 10
5. Marginal Adhesion             1 - 10
6. Single Epithelial Cell Size   1 - 10
7. Bare Nuclei                   1 - 10
8. Bland Chromatin               1 - 10
9. Normal Nucleoli               1 - 10
10. Mitoses                      1 - 10
11. Class:                        (2 for benign, 4 for malignant)

8. Missing attribute values: 16

There are 16 instances in Groups 1 to 6 that contain a single missing
(i.e., unavailable) attribute value, now denoted by "?".
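
Those "?" entries have to be handled explicitly when parsing the file; representing them as NaN, as in the sketch below, is one choice made here for illustration (the caller can then skip or impute such records), not anything prescribed by the donors. The sample lines are illustrative of the comma-separated format.

```java
// Sketch of parsing a breast-cancer-wisconsin record, where a missing
// attribute value appears as "?". A missing value is represented as NaN
// so the caller can decide whether to skip or impute the record.
public class ParseRecord {

    // Fields: id, nine 1-10 attributes, class (2 benign / 4 malignant).
    static double[] parse(String line) {
        String[] f = line.split(",");
        double[] v = new double[f.length];
        for (int i = 0; i < f.length; i++)
            v[i] = f[i].equals("?") ? Double.NaN : Double.parseDouble(f[i]);
        return v;
    }

    // True iff any attribute of this record was "?".
    static boolean hasMissing(double[] v) {
        for (double x : v) if (Double.isNaN(x)) return true;
        return false;
    }

    public static void main(String[] args) {
        double[] ok  = parse("1000025,5,1,1,1,2,1,3,1,1,2");
        double[] bad = parse("1057013,8,4,5,1,2,?,7,3,1,4");
        System.out.println(hasMissing(ok) + " " + hasMissing(bad));
    }
}
```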

9. Class distribution:

Benign: 458 (65.5%)
Malignant: 241 (34.5%)

## Priority search tree demo

Priority search tree demo was my final project for Brown’s Computational Geometry course. A priority search tree is a data structure that supports efficient searching and point location in “one and one-half dimensions”: range queries in x in which the upper bound on y is missing. It is a hybrid of a heap and a balanced search tree (a heap on the y-coordinates and a search tree on the x-coordinates).
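
A minimal sketch of the idea (not the project’s actual code): each node stores the maximum-y point of its subtree, heap-style, plus a split value on x, search-tree-style, so a query of the form x1 ≤ x ≤ x2, y ≥ y0 can prune on both coordinates.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a priority search tree: a max-heap on y combined with
// a balanced split on x, supporting "one and one-half dimensional" queries
//   x1 <= x <= x2  and  y >= y0   (no upper bound on y).
public class PrioritySearchTree {
    final double x, y;        // heap entry: the max-y point in this subtree
    final double splitX;      // largest x in the left subtree
    final PrioritySearchTree left, right;

    private PrioritySearchTree(double x, double y, double splitX,
                               PrioritySearchTree l, PrioritySearchTree r) {
        this.x = x; this.y = y; this.splitX = splitX; left = l; right = r;
    }

    // pts must be sorted by x; the max-y point becomes the root, and the
    // rest is split at the median x, as in a balanced search tree.
    static PrioritySearchTree build(List<double[]> pts) {
        if (pts.isEmpty()) return null;
        int top = 0;
        for (int i = 1; i < pts.size(); i++)
            if (pts.get(i)[1] > pts.get(top)[1]) top = i;
        double[] p = pts.get(top);
        List<double[]> rest = new ArrayList<>(pts);
        rest.remove(top);                        // still sorted by x
        int mid = rest.size() / 2;
        List<double[]> l = rest.subList(0, mid);
        List<double[]> r = rest.subList(mid, rest.size());
        double splitX = l.isEmpty() ? Double.NEGATIVE_INFINITY
                                    : l.get(l.size() - 1)[0];
        return new PrioritySearchTree(p[0], p[1], splitX,
                                      build(new ArrayList<>(l)),
                                      build(new ArrayList<>(r)));
    }

    // Report all points with x1 <= x <= x2 and y >= y0 into out.
    void query(double x1, double x2, double y0, List<double[]> out) {
        if (y < y0) return;                      // heap property prunes on y
        if (x1 <= x && x <= x2) out.add(new double[]{x, y});
        if (left != null && x1 <= splitX) left.query(x1, x2, y0, out);
        if (right != null && x2 >= splitX) right.query(x1, x2, y0, out);
    }

    public static void main(String[] args) {
        List<double[]> pts = new ArrayList<>();
        for (double[] p : new double[][]{{1,5},{2,9},{3,1},{4,7},{5,3}})
            pts.add(p);
        PrioritySearchTree t = build(pts);
        List<double[]> out = new ArrayList<>();
        t.query(2, 4, 4, out);                   // finds (2,9) and (4,7)
        System.out.println(out.size());
    }
}
```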

## wristsavr

wristsavr — forces you to take a break.

wristsavr saves your wrists: it periodically zwrites & xlocks your screen to remind you to take a 2 minute break. This is a little ditty that I whipped up last night to avoid working on my operating system.

usage: wristsavr [-hb] [-m mins]
-h        Display usage information.
-b        Bully mode.  If xlock terminates before two minutes,
xlock the screen again.
-m mins   Wait mins minutes between wristsavr notices (default 45)

## Java Heap

An array-based implementation of a priority queue, using a Vector to do all of the dirty work. The HeapDescending class is probably what you’re interested in — it was implemented from the pseudocode in Cormen, Leiserson and Rivest.
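
In the style of the CLR pseudocode, the core of such a class might look like the sketch below; it uses an ArrayList where the original uses a Vector, and the class name HeapSketch is mine, not the package’s.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an array-based max-heap priority queue in the style of the
// Cormen, Leiserson and Rivest pseudocode (an illustration; the actual
// HeapDescending class may differ in detail).
public class HeapSketch {
    private final List<Integer> a = new ArrayList<>();

    public void insert(int key) {           // HEAP-INSERT
        a.add(key);
        int i = a.size() - 1;
        while (i > 0 && a.get((i - 1) / 2) < a.get(i)) {
            swap(i, (i - 1) / 2);           // float the new key upward
            i = (i - 1) / 2;
        }
    }

    public int extractMax() {               // HEAP-EXTRACT-MAX
        int max = a.get(0);
        a.set(0, a.get(a.size() - 1));
        a.remove(a.size() - 1);
        if (!a.isEmpty()) heapify(0);
        return max;
    }

    private void heapify(int i) {           // CLR's HEAPIFY: sift down
        int l = 2 * i + 1, r = 2 * i + 2, largest = i;
        if (l < a.size() && a.get(l) > a.get(largest)) largest = l;
        if (r < a.size() && a.get(r) > a.get(largest)) largest = r;
        if (largest != i) { swap(i, largest); heapify(largest); }
    }

    private void swap(int i, int j) {
        int t = a.get(i); a.set(i, a.get(j)); a.set(j, t);
    }

    public boolean isEmpty() { return a.isEmpty(); }

    public static void main(String[] args) {
        HeapSketch h = new HeapSketch();
        for (int k : new int[]{3, 9, 1, 7}) h.insert(k);
        while (!h.isEmpty())
            System.out.print(h.extractMax() + " ");   // 9 7 3 1
    }
}
```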

## Java TreeGraphics

A java package for visualizing binary trees in ASCII text. Only the abstract superclass TreeGraphics, and concrete subclasses NullTreeGraphics and ASCIITreeGraphics have been ported from the Pascal source developed for Brown University Computer Science 16.

The TreeGraphics routines work by having you supply a pre-order traversal of your tree, expressed as calls to DrawInternal and DrawLeaf. The signatures of these calls are

    DrawInternal(String nodeLabel);
    DrawLeaf();

For example:

                                                                     39
.------------------------------------+------------------------------------.
|                                                                         |
|                                                                         |
25                                                                        45
.-----------------+-----------------.                                     .-----------------+-----------------.
|                                   |                                     |                                   |
|                                   |                                     |                                   |
19                                  35                                   [_]                                  51
.--------+--------.                 .--------+--------.                                                       .--------+--------.
|                 |                 |                 |                                                       |                 |
|                 |                 |                 |                                                       |                 |
13               [_]               [_]                38                                                     [_]               [_]
.---+---.                                             .---+---.
|       |                                             |       |
|       |                                             |       |
[_]     [_]                                           [_]     [_]

Remember that TreeGraphics needs the calls in pre-order (root-left-right), so the sequence of calls to create this tree would have been the following:

DrawInternal("39");
DrawInternal("25");
DrawInternal("19");
DrawInternal("13");
DrawLeaf();
DrawLeaf();
DrawLeaf();
DrawInternal("35");
DrawLeaf();
DrawInternal("38");
DrawLeaf();
DrawLeaf();
DrawInternal("45");
DrawLeaf();
DrawInternal("51");
DrawLeaf();
DrawLeaf();
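
A client that produces this call sequence is just a pre-order recursion. The sketch below models the calls as printed strings; the Node class is a hypothetical stand-in for whatever tree you are drawing.

```java
// A hypothetical TreeGraphics client: a pre-order walk that emits one
// DrawInternal call per internal node and one DrawLeaf call per (null)
// external node. Here the calls are modeled as printed strings.
public class TreeGraphicsClient {
    static class Node {
        final String label;
        final Node left, right;
        Node(String label, Node left, Node right) {
            this.label = label; this.left = left; this.right = right;
        }
    }

    // Pre-order (root-left-right): draw the node, then recurse.
    static void emit(Node n, StringBuilder out) {
        if (n == null) { out.append("DrawLeaf();\n"); return; }
        out.append("DrawInternal(\"").append(n.label).append("\");\n");
        emit(n.left, out);
        emit(n.right, out);
    }

    // Builds the example tree shown above and returns its call sequence.
    static String exampleCalls() {
        Node t = new Node("39",
                     new Node("25",
                         new Node("19", new Node("13", null, null), null),
                         new Node("35", null, new Node("38", null, null))),
                     new Node("45", null, new Node("51", null, null)));
        StringBuilder out = new StringBuilder();
        emit(t, out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.print(exampleCalls());
    }
}
```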

## An untraceable, universally verifiable voting scheme

Seminar in Cryptology
Professor Philip Klein
December 12, 1995

## Abstract

Recent electronic voting schemes have shown the ability to protect the privacy of voters and to prevent voters from being coerced into revealing their votes. These schemes separate the voter’s identity from the vote, but do not do so unconditionally. In this paper we apply a technique called blinded signatures to a voter’s ballot so that it is impossible for anyone to trace the ballot back to the voter. We achieve the desired properties of privacy, universal verifiability, convenience and untraceability at the expense of receipt-freeness.

Full text: voting.pdf (Adobe Acrobat PDF, 47K)
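
The blinding step at the heart of the scheme can be illustrated with Chaum’s RSA blind signature; the sketch below uses a toy key for readability and is an illustration of the primitive, not the paper’s full protocol.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Sketch of an RSA blind signature (Chaum): the voter multiplies the
// ballot hash m by r^e before submitting it, so the signer never sees m,
// yet dividing r back out of the signature on the blinded message yields
// a valid, untraceable signature on m itself.
public class BlindSignature {

    // One blind-signature round on a toy key; returns true iff the
    // unblinded signature verifies against the original message.
    static boolean demo() {
        SecureRandom rnd = new SecureRandom();
        // Toy RSA key (never use such small parameters in practice).
        BigInteger p = BigInteger.valueOf(61), q = BigInteger.valueOf(53);
        BigInteger n = p.multiply(q);                       // modulus 3233
        BigInteger phi = p.subtract(BigInteger.ONE)
                          .multiply(q.subtract(BigInteger.ONE));
        BigInteger e = BigInteger.valueOf(17);              // public exponent
        BigInteger d = e.modInverse(phi);                   // signing exponent

        BigInteger m = BigInteger.valueOf(1234);            // ballot "hash"

        // Voter: blind m with a random r coprime to n.
        BigInteger r;
        do { r = new BigInteger(n.bitLength() - 1, rnd); }
        while (r.signum() == 0 || !r.gcd(n).equals(BigInteger.ONE));
        BigInteger blinded = m.multiply(r.modPow(e, n)).mod(n);

        // Signer: signs the blinded message without ever seeing m.
        BigInteger blindSig = blinded.modPow(d, n);

        // Voter: unblind to recover an ordinary signature on m.
        BigInteger sig = blindSig.multiply(r.modInverse(n)).mod(n);

        // Anyone can verify: sig^e mod n == m.
        return sig.modPow(e, n).equals(m);
    }

    public static void main(String[] args) {
        System.out.println(demo());   // true
    }
}
```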

## xmsg

xmsg uses Tk/Tcl and Sun RPC to pop up windows of text to a remote user. It is loosely based on the old cs project xmesg, which required you to munge your xhost settings. xmsg instead uses a client-server paradigm to avoid those security holes.

Unfortunately, before I could finish xmsg, the cs dept. discovered zephyr (a project at MIT), which is much better than xmsg could ever be. Thus, I never finished the project.

source (gzip’d tarfile).