Our interests are rooted in the longstanding fundamental problem of how intelligent systems can acquire features, terms, and other representational structures that are prerequisite to further learning. Simple learning can be accomplished by applying statistical pattern-finding algorithms, but how are learned patterns employed in the grander design of sustained learning over the lifetime of an intelligent entity? In terms of Clark & Thornton's compelling distinction between Type-1 and Type-2 learning, we are intrigued by the issues involved in understanding and modeling Type-2 learning, which involves building representational mappings when no statistical relationship exists between the independent and dependent variables. Quartz has also described this dichotomy well: first setting up representations (Type-2), and then making use of those representations for the better-understood processes of identifying simple input/output mappings (Type-1). What mechanisms can account for sustained Type-2 and Type-1 learning? An understanding of these issues is central to attaining new levels of mechanical intelligence.
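To make the Type-2 difficulty concrete, consider parity (XOR), a standard illustration of such problems: no individual input variable carries any statistical information about the output, so Type-1 pattern-finding over the raw inputs finds nothing, yet a simple recoding makes the mapping trivial. The sketch below is our own illustration in Python, not drawn from any particular system.

```python
# Illustration (ours, not from the original work): 3-bit parity as a Type-2 problem.
# Each input bit, taken alone, is statistically independent of the output, so
# pattern-finding over raw inputs fails; a recoded feature (the sum of the bits)
# turns the task into a trivial Type-1 lookup.
from itertools import product

examples = [(bits, sum(bits) % 2) for bits in product([0, 1], repeat=3)]

# For every input bit, P(y=1 | x_i=1) equals the unconditional P(y=1).
p_y1 = sum(y for _, y in examples) / len(examples)
for i in range(3):
    cond = [y for bits, y in examples if bits[i] == 1]
    print(f"bit {i}: P(y=1)={p_y1:.2f}  P(y=1 | x_{i}=1)={sum(cond)/len(cond):.2f}")

# After recoding with the feature f(x) = sum(x), the target is parity of f(x):
recoded = {sum(bits): y for bits, y in examples}
print("recoded mapping:", recoded)
```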
We have been working on algorithms that incidentally solve Type-1 problems when they are present and solvable, each producing a building block that joins a many-layered, deeply nested organization of such elements. Earlier learning lays the groundwork for later learning. We are guided by the principle that all learning is simple if the prerequisites are in place. We are examining learning from textbooks, as each text presents a large body of coherent knowledge carefully organized in terms of building blocks and their dependencies. This will be our direction for at least the next several years.
We are developing a method that applies goal regression in a brute-force manner to solve a problem domain. The result is a policy, expressed as a list of decision rules. From these rules, we are examining how to extract features that allow the rules to be compressed.
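As a minimal sketch of the idea, the Python fragment below works a hypothetical one-dimensional corridor domain (our illustration, not the domain we study): regressing the goal backward through the actions, breadth-first, yields one decision rule per state, and a single extracted feature collapses those rules to one.

```python
# A hedged sketch (toy domain, not the authors' system) of brute-force goal
# regression over concrete states: regress the goal backward through the
# actions, breadth-first, recording one (state -> action) decision rule per state.
from collections import deque

N = 8                                  # corridor positions 0..N; the goal is position 0
GOAL = 0

def predecessors(state):
    """Yield (prev_state, action) pairs whose action maps prev_state to state."""
    if state + 1 <= N:
        yield state + 1, "move_left"   # moving left from state+1 reaches state
    if state - 1 >= 0:
        yield state - 1, "move_right"  # moving right from state-1 reaches state

policy = {}                            # state -> action, i.e. a list of decision rules
frontier = deque([GOAL])
seen = {GOAL}
while frontier:
    s = frontier.popleft()
    for prev, action in predecessors(s):
        if prev not in seen:
            policy[prev] = action      # first (shortest) regression wins
            seen.add(prev)
            frontier.append(prev)

print(sorted(policy.items()))          # one rule per state: 1..N -> move_left

# Feature extraction / compression: every rule tests the same feature
# (position > 0), so the N rules collapse to a single rule over that feature.
assert all(a == "move_left" for s, a in policy.items() if s > 0)
print("compressed rule: if position > 0 then move_left")
```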
UMass researchers from the Computer Vision Laboratory and the Machine Learning Laboratory are collaborating with marine scientists from Bigelow Laboratory to build image classification systems capable of discriminating among a large variety of plankton. The ability to automate the analysis of samples in various settings will enable studies of such ocean life to be conducted at much lower cost and in a much more timely manner. We are investigating classification algorithms, ensemble classifiers, and feature sets that enable the separation of useful classes in challenging real-world problems.
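As a hedged sketch of this kind of pipeline, the fragment below trains a voting ensemble over placeholder features on synthetic data; the actual feature sets (for example, shape and texture measurements of plankton images) and the choice of base classifiers are precisely what the project is investigating.

```python
# A sketch only: synthetic data and placeholder features standing in for image
# measurements (e.g., area, perimeter, eccentricity, mean intensity), fed to an
# ensemble classifier that votes across several base learners.
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 4))                    # placeholder feature vectors
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",
)
scores = cross_val_score(ensemble, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```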
One would like to be able to take as input an audio signal of a music performance and produce as output a music score for the selection that was performed. There are many subproblems, including determining the fundamental pitch events, pulse, meter, and key signature; identifying intentional variations of pitch and note duration; and recognizing articulation, voice leading, and chords. In addition to a score, one would like to produce a variety of analyses of a piece of music, such as a grammatical analysis that captures the structure underlying the composition.
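To give one concrete subproblem, the sketch below estimates the fundamental pitch of a single synthetic frame by autocorrelation. This is only the monophonic core idea, not the method of any particular transcription system, which must also cope with polyphony, noise, and expressive variation.

```python
# A minimal pitch-estimation sketch on a synthetic tone: pick the autocorrelation
# peak within a plausible lag range and convert the lag to a frequency.
import numpy as np

SR = 44100                       # sample rate (Hz)
f0 = 220.0                       # synthesize an A3 with a few harmonics
t = np.arange(int(0.05 * SR)) / SR
frame = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in (1, 2, 3))

def estimate_f0(frame, sr, fmin=50.0, fmax=1000.0):
    """Return the autocorrelation-peak lag, converted to a fundamental-frequency estimate."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

print(f"estimated fundamental: {estimate_f0(frame, SR):.1f} Hz")
```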
We have developed a variety of decision tree induction algorithms over the years, including the ITI (Incremental Tree Inducer) and DMTI (Direct Metric Tree Induction) systems.
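For readers unfamiliar with this family of algorithms, the following sketch shows plain top-down induction over symbolic attributes using information gain. It is not the ITI or DMTI code itself: as their names indicate, ITI additionally updates its tree incrementally as examples arrive, and DMTI selects among candidate trees using a direct quality metric.

```python
# A hedged sketch of batch, top-down decision tree induction with information gain.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attrs):
    """Pick the attribute whose split leaves the least weighted entropy (largest gain)."""
    def remainder(a):
        groups = {}
        for row, y in zip(rows, labels):
            groups.setdefault(row[a], []).append(y)
        return sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return min(attrs, key=remainder)

def induce(rows, labels, attrs):
    if len(set(labels)) == 1 or not attrs:
        return Counter(labels).most_common(1)[0][0]       # leaf: majority class
    a = best_attribute(rows, labels, attrs)
    tree = {"attr": a, "children": {}}
    for v in set(r[a] for r in rows):
        sub = [(r, y) for r, y in zip(rows, labels) if r[a] == v]
        srows, slabels = zip(*sub)
        tree["children"][v] = induce(list(srows), list(slabels), attrs - {a})
    return tree

# Tiny symbolic dataset: predict 'play' from weather attributes.
rows = [{"outlook": "sunny", "windy": False}, {"outlook": "sunny", "windy": True},
        {"outlook": "rain", "windy": False}, {"outlook": "rain", "windy": True}]
labels = ["yes", "no", "yes", "no"]
print(induce(rows, labels, {"outlook", "windy"}))
```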