Evaluation of Colour Recognition Algorithms with a Palette Designed for Applications which Aid People with Visual Impairment
Author: Bartosz Papis
Journal: International Journal of Image, Graphics and Signal Processing (IJIGSP)
Issue: 12, vol. 6, 2014.
This paper presents the evaluation of three machine learning algorithms applied to colour recognition. The “primary” colour palette is defined in accordance with results from the social sciences. Decision Tree, Support Vector Machine and k-Nearest Neighbours classifiers are tested on various data sets created for this purpose. One of the distance measures considered for the k-Nearest Neighbours classifier is DeltaE2000 - the standard colour difference formula, designed in conformance with human perception. Additionally, we compare these algorithms with various available colour recognition applications.
Color, computer vision, machine learning
Short address: https://sciup.org/15013452
IDR: 15013452
I. Introduction
Recent advances in touch screen technology for mobile devices have opened a wide range of possibilities for making these devices more accessible to the visually impaired. The iPhone series offers the VoiceOver application, which uses interactive speech output to help users navigate what is on the screen and identify what they are pointing at. For the Pocket PC, the Slide Rule framework was presented [1], introducing a comprehensive design for an eyes-free interface. In the context of colour recognition, many applications and hardware solutions have been introduced. A few examples of the latter are the Color Test and Colorino handheld devices, as well as the EyeRing wearable device [2]. Mobile phone applications include Color Visor, Colored Eye, Color Identifier, aidColors, HueVue, Say Color, iColorName4 and Vision Hunt for iPhone devices, along with Color ID, Color Detector and Color Grab for Android. Users’ convenience and the high price of dedicated devices have led us to focus on applications for mobile devices, in particular for the iPhone, which provides well-designed support for the visually impaired in general [3]. All of these applications use different colour palettes, and most of them use a different notion of primary colours. The majority of them also apply rather complicated palettes (with some exceptions, such as aidColors, which recognizes six colours). This makes their comparison difficult, and calls into question the guidelines that were followed when choosing a particular set of colours.
The goal of this work is to propose a palette of primary colours that is in accordance with published results on human colour vision. Furthermore, the official standard colour difference measure, DeltaE2000, is evaluated against several other colour space metrics using machine learning techniques.
Section II defines a colour palette on the basis of insights into human colour perception from the social sciences. Section III presents the data sets used to evaluate the classifiers described in Section IV. Section V gives the details of the performed experiments and their results, including a comparison with existing colour recognition applications. Section VI presents the conclusions.
-
II. Palette
The colour recognition task requires choosing a palette of available colours, which forms the set of target categories for classification. This problem is not trivial, as even for people without any vision impairment the notion of “colour” is highly subjective [4]. To make the application useful to the widest possible audience, the following groups can be distinguished:
1) People who were born without sight, or lost their sight very early
2) People who lost their sight during their lifetime
3) People with CVI (Cortical Visual Impairment)
4) People with total colour blindness
5) People with partial red-green colour blindness
6) People with partial blue-yellow colour blindness
The palette should contain a set of colours which will be useful to people from all of these groups.
The following reasoning will introduce parts of the final palette, along with premises that justify their usage. The colour coordinates are scaled to the range: [0 , 255], and are presented in the sRGB [5] colour space, in format ( R, G, B ). Colours are mixed according to the following formulas:
R = ⌈(R1 + R2) / 2⌉
G = ⌈(G1 + G2) / 2⌉
B = ⌈(B1 + B2) / 2⌉
where (R1, G1, B1) and (R2, G2, B2) are the sRGB coordinates of the two colours being mixed.
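For illustration, a minimal Python sketch of this mixing rule (the function name and the example values are ours, not from the paper):

```python
import math

def mix(c1, c2):
    """Mix two sRGB colours channel-wise, rounding up as in the formulas above."""
    return tuple(math.ceil((a + b) / 2) for a, b in zip(c1, c2))

# Example: mixing the primaries green and red gives (128, 128, 0),
# which appears as "olive" among the second order colours listed below.
print(mix((0, 255, 0), (255, 0, 0)))  # (128, 128, 0)
```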
For the first group of people, the notion of colour is irrelevant. The palette should thus only include the most primary colours, probably those that commonly encode some additional meaning (e.g. red - danger). The “most primary” colours can be defined by referring to the opponent-process theory [6], [7], which describes the psychological primary colours:
1) black (0, 0, 0)
2) white (255, 255, 255)
3) green (0, 255, 0)
4) red (255, 0, 0)
5) yellow (255, 255, 0)
6) blue (0, 0, 255)
The second group of people, who lost their sight during their lifetime, recognizes the notion of colour. The third group, people with CVI, can usually distinguish between primary colours [8]. To take these groups into consideration, the palette should be extended with a more advanced set of colours. The second order colours can be generated from the set of psychological primary colours. The “black” and “white” colours were excluded from the mixing procedure, as the notion of lightness is a different category than hue [9], [10]. Thus, the second order colours are as follows:
1) olive (128, 128, 0) = green + red
2) chartreuse (128, 255, 0) = green + yellow
3) teal (0, 128, 128) = green + blue
4) orange (255, 128, 0) = red + yellow
5) purple (128, 0, 128) = red + blue
6) gray (128, 128, 128) = yellow + blue
Another set of “advanced” colours is defined by Berlin and Kay [11], and by Kay and Maffi [12], depending on the notion of culture level. For all culture levels, the colours additionally include:
1) brown (150, 75, 0)
2) pink (255, 192, 203)
3) orange (255, 165, 0)
There is a clash between two different versions of “orange” colour in these two sets. The latter is chosen here arbitrarily.
These sets already include colours which are not perceived by people from the three groups related to colour blindness. Thus, the final palette proposed is as follows:
1) black (0, 0, 0)
2) white (255, 255, 255)
3) green (0, 255, 0)
4) red (255, 0, 0)
5) yellow (255, 255, 0)
6) blue (0, 0, 255)
7) olive (128, 128, 0)
8) chartreuse (128, 255, 0)
9) teal (0, 128, 128)
10) purple (128, 0, 128)
11) gray (128, 128, 128)
12) brown (150, 75, 0)
13) pink (255, 192, 203)
14) orange (255, 165, 0)
Apart from hue, recognized colours should also be described with their brightness, warmness etc., and possibly with some corresponding real-world objects (e.g. green - grass). However, such descriptions can easily be added to each colour, so this paper focuses only on hue recognition. Moreover, the colour names presented here are used only for the purpose of this paper - in the user interface they should be given more meaningful names. For example, it is probably better to say that a colour is between “green” and “yellow” instead of using the name “chartreuse”.
-
III. Data Sets
In this work, two types of data sets are used: pixel data sets and photograph data sets. Pixel data sets contain pairs of an RGB-encoded colour value and its name. Photograph data sets contain photographs of real-world objects, taken with an iPhone 5 and labelled with the name of the dominant colour.
-
A. Pixel Data Set
The RGB-encoded values used to create the pixel data set were taken from three colour data sets available on-line:
1) The colour database from the “Name that color” script [13] - referred to as the NTC data set
2) The “RAL Classic collection” colour list [14] - referred to as the RAL data set
3) The “Resene RGB Values List” colour list [15] - referred to as the RESENE data set
Colour labels in these data sets were then altered to match the palette defined in List 1, using an automatic process based on the “Encycolorpedia” website [16]. For each colour sample, the steps of this automated process were as follows (a code sketch of the process is given below):
1) Download the website contents from the URL http://encycolorpedia.com/RRGGBB, where RRGGBB corresponds to the hexadecimal RGB value of the processed colour sample
2) Read the first sentence of the colour description paragraph
3) If the sentence contains the words “very light”, set the label to “white”
4) If the sentence contains the words “very dark”, set the label to “black”
5) Otherwise, process the last word (i.e. the string delimited by a space and a dot) of this sentence
6) If the word contains a hyphen, split it and process each sub-word separately
7) Trim and lowercase each sub-word
8) Translate “cyan” to “teal”, “magenta” to “purple” and “violet” to “purple” for each sub-word
9) If a matching label was found in the defined palette (List 1), set the label to the first matching label
10) If no matching label was found, set the label to all sub-words concatenated with the “ or ” conjunction between them
The label produced by this process was then concatenated, using the “or” conjunction, with those entries from the defined palette (List 1) that matched the colour's original label.
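A minimal Python sketch of the relabelling step described above (step 1, downloading the description page, is omitted; the palette set, synonym map and helper names are illustrative assumptions, as the paper does not give an implementation):

```python
PALETTE = {"black", "white", "green", "red", "yellow", "blue", "olive",
           "chartreuse", "teal", "purple", "gray", "brown", "pink", "orange"}
SYNONYMS = {"cyan": "teal", "magenta": "purple", "violet": "purple"}

def relabel(first_sentence):
    """Apply steps 2-10 to the first sentence of a colour's Encycolorpedia description."""
    text = first_sentence.lower()
    if "very light" in text:                          # step 3
        return "white"
    if "very dark" in text:                           # step 4
        return "black"
    last_word = text.rstrip(".").split()[-1]          # step 5
    sub_words = [SYNONYMS.get(w.strip(), w.strip())   # steps 6-8
                 for w in last_word.split("-")]
    for w in sub_words:                               # step 9
        if w in PALETTE:
            return w
    return " or ".join(sub_words)                     # step 10
```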
Ambiguous labels (those which contain more than one word separated with “or”) were thereafter manually and arbitrarily corrected. The manual corrections were as follows:
1) In the NTC data set:
   (1) Colour 044259 label “blue or teal” changed to “teal”
   (2) Colour 24500F label “black or olive” changed to “olive”
   (3) Colour ADFF2F label “green or yellow” changed to “green”
   (4) Colour A9A491 label “olive or gray” changed to “olive”
   (5) Colour B5B35C label “green or olive” changed to “olive”
   (6) Colour DFFF00 label “yellow or chartreuse” changed to “chartreuse”
   (7) Colour E8F1D4 label “white or chartreuse” changed to “chartreuse”
   (8) Colour FEFCED label “white or orange” changed to “yellow”
   (9) Colour FF3F34 label “red or orange” changed to “red”
   (10) Colour FFAE42 label “yellow or orange” changed to “brown”
   (11) Colour FFFEF6 label “black or white” changed to “white”
2) In the RAL data set:
   (1) Colour 1F3438 label “green or blue” changed to “teal”
   (2) Colour 18171C label “black or blue” changed to “blue”
   (3) Colour 1F3A3D label “green or blue” changed to “teal”
3) In the RESENE data set:
   (1) Colour 254636 label “bush or bottlegreen” changed to “bottlegreen”
The resulting pixel data set contains 3011 samples with the following class distribution:
1) 112 black samples (∼3.72%)
2) 363 white samples (∼12.06%)
3) 192 green samples (∼6.38%)
4) 311 red samples (∼10.33%)
5) 488 yellow samples (∼16.21%)
6) 253 blue samples (∼8.40%)
7) 7 olive samples (∼0.23%)
8) 2 chartreuse samples (∼0.07%)
9) 459 teal samples (∼15.24%)
10) 120 purple samples (∼3.99%)
11) 41 gray samples (∼1.36%)
12) 228 brown samples (∼7.57%)
13) 218 pink samples (∼7.24%)
14) 217 orange samples (∼7.21%)
-
B. Textile Data Set
The textile data set was created by taking pictures of pieces of cloth. Two instances of this data set were created under different lighting conditions, with five photographs for each colour of the palette (List 1), taken with an iPhone 5 mobile device. These instances are further referred to as the textile1 and textile2 data sets.
-
C. Cloth Data Set
The cloth data set was created by taking pictures of ordinary clothes. Ten photographs for each colour were taken, using an iPhone 5 mobile device. This data set is used in the final test; therefore, the existing colour recognition applications were also evaluated ten times for each colour, under the same conditions in which the photographs were taken.
-
IV. Classifiers
The following classifiers were evaluated:
1) Decision tree [17], with the C4.5 learning algorithm [18] (Tree)
2) Support vector machine [19] (SVM)
3) k-nearest neighbours [20] (k-NN)
The tree classifier was tuned with the following parameters:
1) Attribute reuse factor (k ∈ [1, 8])
2) Colour space cs ∈ {RGB, HSV, LAB, LCH, XYZ}
The attribute reuse factor artificially increases the number of attributes by duplicating them under different names. In the implementation of the C4.5 algorithm used here, each attribute can appear only once in the resulting decision tree; owing to this parameter, attributes can be reused in the tree's nodes. This parameter also indirectly controls the height of the tree. The second parameter, colour space, controls the colour space to which samples are converted before being presented to the tree. Even if some colour spaces are the result of simple transformations, this parameter is useful, because a decision tree can only produce decision boundaries orthogonal to the coordinate axes.
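As an illustration of these two parameters, the sketch below converts RGB samples to HSV and duplicates every attribute k times before fitting a tree. It uses scikit-learn's CART implementation as a stand-in for C4.5 (the paper's implementation differs) and the standard-library colorsys module for the conversion; the sample data are hypothetical:

```python
import colorsys
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def to_hsv(rgb):
    """Convert an (R, G, B) triple in [0, 255] to HSV coordinates in [0, 1]."""
    return colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))

def reuse_attributes(X, k):
    """Duplicate every attribute k times, mimicking the attribute reuse factor."""
    return np.hstack([X] * k)

X_rgb = [(0, 255, 0), (255, 0, 0), (128, 128, 0)]   # hypothetical samples
y = ["green", "red", "olive"]

X_hsv = np.array([to_hsv(c) for c in X_rgb])        # cs = HSV
tree = DecisionTreeClassifier().fit(reuse_attributes(X_hsv, k=5), y)
print(tree.predict(reuse_attributes(np.array([to_hsv((250, 10, 10))]), k=5)))
```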
For the SVM classifier the following kernels were considered:
1) Gaussian kernel: Gaussian(γ) ≡ k(x_i, x_j) = exp(-γ ||x_i - x_j||^2)
2) Linear kernel: Linear(c) ≡ k(x_i, x_j) = c + x_i · x_j
3) Polynomial kernel: Polynomial(c, d) ≡ k(x_i, x_j) = (c + x_i · x_j)^d
The parameters for the SVM classifier were as follows: for the Gaussian kernel, γ ∈ {0.001, 0.01, 0.1, 1, 2, 5, 10, 100}; for the Linear kernel, c ∈ {0.001, 0.01, 0.1, 1, 10, 100}; and for the Polynomial kernel, d ∈ {2, 3} and c ∈ {0.001, 0.01, 0.1, 1, 2, 5, 10, 100}. Furthermore, different colour spaces were evaluated: cs ∈ {RGB, HSV, LAB, LCH, XYZ}. The SVM was trained in the one-against-one paradigm to support multi-class classification. Results for one-against-all were poorer in almost all cases.
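A minimal scikit-learn sketch of this setup (the data and parameter values are illustrative; the paper used a different SVM implementation):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical RGB training samples (scaled to [0, 1]) and their palette labels.
X = np.array([(0, 255, 0), (255, 0, 0), (0, 0, 255), (128, 128, 0)]) / 255.0
y = ["green", "red", "blue", "olive"]

# Gaussian (RBF) kernel with gamma = 10; SVC trains one-against-one classifiers internally.
svm = SVC(kernel="rbf", gamma=10).fit(X, y)
print(svm.predict([[0.8, 0.1, 0.1]]))  # likely "red"
```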
For the k-NN classifier various distance measures were considered. The simplest ones are based on the Manhattan distance (the L1 norm) or the Euclidean distance (the L2 norm) in a chosen colour space. The Euclidean measures employ the formula:
d(C1, C2) = [(C1(1) - C2(1))^2 + (C1(2) - C2(2))^2 + (C1(3) - C2(3))^2]^(1/2)
where C1 and C2 are two points in one of the {RGB, HSV, LAB} colour spaces, and the (1), (2) and (3) subscripts denote the values of subsequent coordinates in the particular colour space.
Similarly, the Manhattan distance is defined as:
d(C1, C2) = |C1(1) - C2(1)| + |C1(2) - C2(2)| + |C1(3) - C2(3)|
Particular distance measures are given names composed of the colour space and the underlying norm: RgbEuclidean, RgbManhattan, HsvEuclidean, HsvManhattan and LabManhattan. The LabEuclidean distance measure is called DeltaE76, as it is equivalent to the standard colour metric defined by the Commission Internationale de l’Éclairage [22], [23], [24]. The two successors of this standard are also evaluated: DeltaE94 [25] and DeltaE2000 [26].
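For reference, a small sketch computing the DeltaE76 and DeltaE2000 differences between two palette colours using scikit-image (an illustrative third-party choice, not the implementation used in the paper):

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_cie76, deltaE_ciede2000

def to_lab(rgb):
    """Convert an sRGB triple in [0, 255] to CIE L*a*b*."""
    return rgb2lab(np.array(rgb, dtype=float).reshape(1, 1, 3) / 255.0)

red, orange = to_lab((255, 0, 0)), to_lab((255, 165, 0))
print(deltaE_cie76(red, orange))      # DeltaE76: plain Euclidean distance in Lab
print(deltaE_ciede2000(red, orange))  # DeltaE2000: perceptually tuned difference
```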
Thus, the k-NN classifier was tested with the following parameters:
1) Number of neighbours k
2) Distance metric dm ∈ {
– DeltaE2000(lWeight, cWeight, hWeight)
– DeltaE76(lWeight, aWeight, bWeight)
– DeltaE94(lWeight, aWeight, bWeight)
– HsvEuclidean(hWeight, sWeight, vWeight)
– HsvManhattan(hWeight, sWeight, vWeight)
– LabManhattan(lWeight, aWeight, bWeight)
– RgbEuclidean(rWeight, gWeight, bWeight)
– RgbManhattan(rWeight, gWeight, bWeight) }
where all weights for the distance metrics are in the range [0, 1] and sum up to 1.
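A minimal sketch of a weighted colour distance and the k-NN decision rule (the weighted RgbEuclidean form and the helper names are illustrative; the DeltaE formulas themselves are considerably more involved):

```python
import math
from collections import Counter

def rgb_euclidean(c1, c2, weights=(1/3, 1/3, 1/3)):
    """Weighted Euclidean distance between two RGB triples (weights sum to 1)."""
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, c1, c2)))

def knn_predict(sample, training_data, k=1, dist=rgb_euclidean):
    """Return the majority label among the k nearest training samples."""
    nearest = sorted(training_data, key=lambda item: dist(sample, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical training data: (RGB value, palette label) pairs.
training = [((0, 255, 0), "green"), ((255, 0, 0), "red"), ((128, 128, 0), "olive")]
print(knn_predict((200, 40, 20), training, k=1))  # "red"
```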
-
V. Experiments
-
A. Pixel Data Set Evaluation
The pixel data set was used entirely for the purpose of classifier tuning, using the leave-one-out cross-validation technique. The classifiers with their best parameters were then trained on the pixel data set and evaluated on both textile data sets.
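A minimal sketch of such tuning with scikit-learn's grid search and leave-one-out cross-validation (the data and the small parameter grid are illustrative, not the grids from Section IV):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical pixel samples (RGB scaled to [0, 1]) with their palette labels.
X = np.array([(0, 255, 0), (10, 240, 10), (255, 0, 0), (240, 10, 10),
              (0, 0, 255), (10, 10, 240)]) / 255.0
y = ["green", "green", "red", "red", "blue", "blue"]

grid = GridSearchCV(KNeighborsClassifier(),
                    {"n_neighbors": [1, 3], "metric": ["euclidean", "manhattan"]},
                    cv=LeaveOneOut())
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```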
The results in Tab. I demonstrate that classification of pixels, in terms of the palette defined in List 1, is a problem that can be solved relying mainly on the colour's value (or lightness, or brightness). This is evident from the dominant weight of value in the result for the k-NN classifier. Evaluation on the textile data sets, presented in Tab. II and Tab. III, shows that colour perception in real-world conditions is not well captured by digital colour appearance alone. Without the influence of lighting conditions the colours from a simple palette are easy to distinguish; however, this is not the case with real photograph data.
-
B. Textile Data Set Evaluation
The failure of the pixel data set to correctly represent the real-world notion of colour in the previous subsection leads to an attempt to tune the classifiers on the textile1 data set. The results of tuning are presented in Tab. IV. As expected, the classifiers tuned on the textile1 data set work somewhat better on the textile2 data set. Both data sets contain real-world data; however, the different lighting conditions still lead to poor performance on the textile2 data set. The results of this evaluation are presented in Tab. V. Note that with real-world data, the classifier employing the DeltaE2000 standard colour distance takes the lead.
-
C. Classifier Tuning Using Both The Textile Data Sets
To evaluate the classifiers trained on data containing different lighting conditions, both textile data sets were mixed together. The resulting data set was split into a training and a testing data set with a 7/3 ratio. This gives seven samples for each label in the training set (7*14 = 98 training samples in total), and three samples for each label in the testing set (3*14 = 42 testing samples in total). Tuning results are presented in Tab. VI, and evaluation results in Tab. VII. All classifiers seem to give acceptable performance. The SVM classifier yields the best results, followed by the k-NN using the DeltaE2000 colour distance.
-
D. Comparison with the Existing Applications
In this section, we compare our tuned classifiers with the existing applications. The main difficulty in this task stems from the fact that each application uses a different palette, and in most cases these palettes are not available. We interpret the results of our classifiers strictly, as in the previous experiments - i.e. the result must exactly match the class label. For the other applications, the result is interpreted as correct if it seems acceptable from the user's point of view. While this is of course arbitrary, we want to make the comparison as fair as possible, and thus the results are interpreted in a very flexible way, in favour of the tested application. For example, “tomato” is an acceptable answer for “orange” samples, “blue” is accepted for “turquoise” samples (in case an application does not have “turquoise” in its palette), and so is “malachite” for “green”. Rejected answers include “aqua” for “green” or “gray” for “yellow” or “white”. Additionally, if an application's presented result kept flickering for too long, it was interpreted as “no answer”. The tests were performed using the cloth data set, which contains 140 samples (ten for each label). The results are presented in Tab. VIII.
The best accuracy was reached by the aidColors application (with “green” and “yellow” accepted as answers for “olive” and “chartreuse”, respectively). The second best result, and the best among the classifiers implemented in this work, was achieved by the k-NN classifier using the DeltaE2000 distance.
Table 1. Classifier Tuning On The Pixel Data Set
| Classifier | Parameters | Accuracy |
| Tree | cs = HSV, k = 5 | 92% |
| SVM | cs = LCH, kernel = Gaussian(γ = 10) | 88% |
| k-NN | dm = HsvManhattan(0.02, 0.04, 0.94), k = 7 | 90% |
Table 2. Evaluation Of Classifiers Tuned With The Pixel Data Set On The Textile1 Data Set
| Classifier | Parameters | Accuracy |
| Tree | cs = HSV, k = 5 | 27% |
| SVM | cs = LCH, kernel = Gaussian(γ = 10) | 25% |
| k-NN | dm = HsvManhattan(0.02, 0.04, 0.94), k = 7 | 20% |
Table 3. Evaluation Of Classifiers Tuned With The Pixel Data Set On The Textile2 Data Set
| Classifier | Parameters | Accuracy |
| Tree | cs = HSV, k = 5 | 37% |
| SVM | cs = LCH, kernel = Gaussian(γ = 10) | 36% |
| k-NN | dm = HsvManhattan(0.02, 0.04, 0.94), k = 7 | 29% |
Table 4. Classifier Tuning On The Textile1 Data Set
| Classifier | Parameters | Accuracy |
| Tree | cs = HSV, k = 2 | 80% |
| SVM | cs = RGB, kernel = Linear(c = 100) | 79% |
| k-NN | dm = DeltaE2000(0.36, 0.38, 0.26), k = 5 | 89% |
Table 5. Evaluation Of Classifiers Tuned With The Textile1 Data Set On The Textile2 Data Set
| Classifier | Parameters | Accuracy |
| Tree | cs = HSV, k = 2 | 36% |
| SVM | cs = RGB, kernel = Linear(c = 100) | 44% |
| k-NN | dm = DeltaE2000(0.36, 0.38, 0.26), k = 5 | 49% |
Table 6. Classifier Tuning On The Training Part Of Both The Textile Data Sets
| Classifier | Parameters | Accuracy |
| Tree | cs = LAB, k = 3 | 76% |
| SVM | cs = LAB, kernel = Gaussian(γ = 10) | 71% |
| k-NN | dm = DeltaE2000(0.52, 0.2, 0.28), k = 1 | 89% |
Table 7. Evaluation Of Classifiers Tuned With The Training Part Of Both The Textile Data Sets On The Testing Part
| Classifier | Parameters | Accuracy |
| Tree | cs = LAB, k = 3 | 83% |
| SVM | cs = LAB, kernel = Gaussian(γ = 10) | 90% |
| k-NN | dm = DeltaE2000(0.52, 0.2, 0.28), k = 1 | 88% |
Table 8. Final Evaluation On The Cloth Data Set
| Classifier | Parameters | Accuracy |
| Color Visor 2.0 | palette: ColorVisor extended | 29% |
| Say Color 1.1.0 | - | 35% |
| aidColors 1.2 | - | 59% |
| iColorNamer 4 | - | 10% |
| Vision Hunt v2.6 build 1 | - | 41% |
| Color ID 2.3 | - | 17% |
| ColoredEye 2.5.1 | - | 25% |
| Tree | cs = LAB, k = 3 | 34% |
| SVM | cs = LAB, kernel = Gaussian(γ = 10) | 46% |
| k-NN | dm = DeltaE2000(0.52, 0.2, 0.28), k = 1 | 51% |
Table 9. Final Out-Of-Sample Accuracy Estimation
-
E. Tests Using All Photograph Data Sets
For the final test, we mix together both textile data sets and the cloth data set. The resulting set is split in half to create training and testing data sets. The data from all data sets are interlaced, so that all data sets contribute to both the training and the testing set. As in the previous experiments, we tune the algorithms' parameters on the training set using the leave-one-out cross-validation technique to estimate their best values. Using these parameters, we train the classifiers on the training set and test them on the testing set. The results of this evaluation are presented in Tab. IX. The best result, both in the tuning and in the testing phase, was achieved by the k-NN classifier.
-
VI. Conclusion
On the basis of theoretical and experimental results from psychology, anthropology and linguistics, a primary colour palette is introduced and related to the most common types of visual impairments. Two types of data sets were constructed with this palette: the pixel data set, established by means of semi-automated relabelling of freely available colour data sets, and the photograph data set, established by manually taking 280 photographs of real-world pieces of cloth with an iPhone 5 mobile device. To the author's knowledge, this is the first evaluation of machine learning techniques applied to colour recognition presented in this setting. Additionally, commonly available mobile applications were tested and compared with the proposed approach. Although the best result was achieved by the aidColors application, we argue that our solution uses a palette that is more suitable for a wider audience. The palettes used by the other applications are not available, so the presented comparison is not necessarily conclusive. The second best result in terms of accuracy, and the best one among the algorithms implemented for the purpose of this work, is achieved by the k-NN classifier, using the standard DeltaE2000 colour distance formula [26]. The experiments also show that the pixel data sets available on-line do not seem to be very useful for the task of practical, real-world colour classification. To achieve good accuracy, it is essential to train the algorithms on data gathered under different lighting conditions.
Finally, an overall estimate of the out-of-sample accuracy of our algorithms is presented. The best out-of-sample accuracy, 90%, was achieved by the k-NN classifier using the DeltaE2000 distance formula.
Acknowledgment
The author would like to thank Mateusz Kuźmiński, Adam Bancarewicz, Bolesław Tekielski and Rajmund Kożuszek for their indispensable help with gathering test data and for valuable discussions.
Funding
This work was supported within the “Seeing Assistant” Research and Development project, funded from the Structural Funds 2007-2013, Measure 1.4 of the Operational Programme Innovative Economy (OP IE).
References
- S. K. Kane, J. P. Bigham, and J. O. Wobbrock, “Slide rule: Making mobile touch screens accessible to blind people using multi-touch interaction techniques,” in Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, ser. Assets ’08. New York, NY, USA: ACM, 2008, pp. 73–80.
- S. Nanayakkara, R. Shilkrot, and P. Maes, “Eyering: A finger-worn assistant,” in CHI ’12 Extended Abstracts on Human Factors in Computing Systems, ser. CHI EA ’12. New York, NY, USA: ACM, 2012, pp.1961–1966.
- M. E. Wong and S. S. K. Tan, “Teaching the benefits of smart phone technology to blind consumers: Exploring the potential of the iphone,” Journal of Visual Impairment and Blindness, vol. 106, no. 10, pp. 646– 650, 2012.
- S. C. Levinson, “Yélî Dnye and the theory of basic color terms,” Journal of Linguistic Anthropology, vol. 10, no. 1, pp. 3–55, 2000.
- M. Stokes, M. Anderson, S. Chandrasekar, and R. Motta, “A standard default color space for the internet - srgb,” 1996. [Online]. Available: http://www.w3.org/Graphics/Color/sRGB.html (2014-07-23).
- E. Hering, Grundzüge der Lehre vom Lichtsinn. Berlin: Springer, 1920. English version: Outlines of a Theory of the Light Sense, translated by L. M. Hurvich and Dorothea Jameson. Cambridge, MA: Harvard University Press, 1964.
- D. Jameson and L. M. Hurvich, “Opponent-response functions related to measured cone photo-pigments,” Journal of the Optical Society of America, vol. 58, pp. 429–430, 1968.
- J. E. J. Maryke Groenveld and P. Leader, “Observations on the habilitation of children with cortical visual impairment,” Journal of Visual Impairment and Blindness, vol. 84, no. 1, pp. 11–15, 1990.
- P. J. Greenfeld, “What is grey, brown, pink, and sometimes purple: The range of "wild-card" color terms,” American Anthropologist, vol. 88, no. 4, pp. 908–916, 1986.
- P. Kay and L. Maffi, “Color appearance and the emergence and evolution of basic color lexicons,” American Anthropologist, vol. 101, no. 4, pp. 743–760, 1999.
- B. Berlin and P. Kay, Basic Color Terms: Their Universality and Evolution, ser. Center for the Study of Language and Information Lecture Notes Series. C S L I Publications/Center for the Study of Language & Information, 1999.
- P. Kay and L. Maffi, Number of Basic Colour Categories. Max Planck Institute for Evolutionary Anthropology, 2013, ch. Number of Basic Colour Categories.
- C. Mehta, “Name that color,” 2007. [Online]. Available: http://chir.ag/projects/ntc (2014-07-23)
- I. C. for Delivery Terms and Q. Assurance, “Ral classic collection,” 1980. [Online]. Available: http://www.ralcolor.com/ (2014-07-23)
- R. P. Ltd, “Resene rgb values list,” 2001. [Online]. Available: http://www.resene.co.nz (2014-07-23)
- M. Gallagher, “Encycolorpedia,” 2013. [Online]. Available: http://encycolorpedia.com/ (2014-07-23)
- L. Rokach, Data Mining with Decision Trees: Theory and Applications, ser. Series in machine perception and artificial intelligence. World Scientific Publishing Company, Incorporated, 2008.
- R. J. Quinlan, C4.5: Programs for Machine Learning. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1993.
- C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
- N. S. Altman, “An introduction to kernel and nearest-neighbor nonparametric regression,” The American Statistician, vol. 46, no. 3, pp.175–185, 1992.
- C. R. de Souza, “The accord.net framework,” 2009.
- C. I. de l’Éclairage, “Joint ISO/CIE standard: CIE colorimetry part 4: 1976 L*a*b* colour space,” ISO 11664-4:2008(E)/CIE S 014-4/E:2007, Tech. Rep., 2007.
- “Colorimetry,” Commission Internationale de l’Éclairage, Tech. Rep. CIE Publication No. 15.2, 1986.
- F. W. Billmeyer, “Commission Internationale de l’Éclairage, standard on colorimetric illuminants, publication CIE No. S 001, 20 pp.,” Color Research & Application, vol. 13, no. 1, pp. 65–66, 1988.
- C. I. de l’Éclairage, “Industrial color difference evaluation,” 1995.
- G. Sharma, W. Wu, and E. N. Dalal, “The ciede2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations,” Color Research & Application, vol. 30, no. 1, pp. 21–30, 2005.