RADICAL in Romanian (translation provided by Marina Meila)

RADICAL is a new algorithm for Independent Components Analysis (ICA). It is accurate, fast, robust to outliers, and has a simple information-theoretic justification. It also has a very simple implementation and is relatively easy to understand.

A bug in versions prior to 1.2 kept AUG_FLAG from working properly. Try version 1.2 with AUG_FLAG set to 0, and you should see a very large speed-up. Also, try comparing demo_5d and demo_5d_fast to see the speed-up obtained when auxiliary points are not used for smoothing.
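To make the role of the auxiliary (smoothing) points concrete, here is a minimal Python/NumPy sketch of the two-dimensional step of RADICAL: search over rotations of whitened data for the angle minimizing the sum of marginal m-spacing entropies. This is an illustrative re-implementation, not the distributed MATLAB code; the function names and the `replicates`/`noise_std` parameters are mine, and the replicated-jittered-samples smoothing only approximates what AUG_FLAG controls.

```python
import numpy as np

def mspacing_entropy(x, m=None):
    """Vasicek m-spacing entropy estimate of a 1-D sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))                 # common choice: m ~ sqrt(n)
    d = np.maximum(x[m:] - x[:-m], 1e-12)   # m-spacings, guarded against ties
    return np.mean(np.log((n + 1) / m * d))

def radical_2d(X, n_angles=150, replicates=1, noise_std=0.0):
    """Search rotations of whitened 2-D data X (2 x N) for the angle that
    minimizes the sum of marginal m-spacing entropies.
    replicates > 1 with noise_std > 0 mimics auxiliary-point smoothing
    (an approximation of the AUG_FLAG behavior, not the exact code)."""
    if replicates > 1:
        X = np.hstack([X + noise_std * np.random.randn(*X.shape)
                       for _ in range(replicates)])
    best_angle, best_cost = 0.0, np.inf
    for theta in np.linspace(0.0, np.pi / 2, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        Y = np.array([[c, -s], [s, c]]) @ X    # rotate the data by theta
        cost = mspacing_entropy(Y[0]) + mspacing_entropy(Y[1])
        if cost < best_cost:
            best_angle, best_cost = theta, cost
    return best_angle
```

With `replicates=1` (no smoothing) each angle costs only two sorts, which is why turning the augmentation off gives such a large speed-up.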

RADICAL is an acronym for Robust, Accurate, Direct Independent Components Analysis aLgorithm.

We encourage you to try a variety of ICA algorithms. Some algorithms are well suited to situations in which the data is known to have a certain form (highly kurtotic, for example). In our experiments, RADICAL on average outperformed FastICA, JADE, Kernel ICA, and the extended Infomax algorithm. These experiments made only very weak assumptions about the data, using so-called nonparametric methods. If you cannot provide a strong characterization of your data, we suggest trying RADICAL for ICA on your data.

There is a large literature on Independent Components Analysis, including a comprehensive book by Hyvärinen, Karhunen, and Oja. There is a web site for the book, which can be found here. To learn more about RADICAL, which was developed after the publication of that book, you may read our paper in the Journal of Machine Learning Research, which can be found here.

Follow the link at the top of the page for RADICAL version 1.2. Please let me know if you have problems using it.

Yes. Michael Kelm has translated RADICAL into the R language (thanks, Michael!). Please note the following differences between his code and mine: 1) his code expects the data matrix to be (N x dim); 2) the data is expected to be centered, i.e., to have mean 0.

R Code for RADICAL
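Both conventions of the R port are easy to satisfy with a couple of lines of preprocessing. Here is an illustrative sketch in Python/NumPy (the function name is mine and appears in neither codebase): it takes data in the (dim x N) orientation of the original MATLAB code and produces the (N x dim), zero-mean matrix the R code expects.

```python
import numpy as np

def prepare_for_radical_r(X):
    """Reorient and center a data matrix for the R port of RADICAL.
    Takes X as (dim x N) and returns an (N x dim) matrix whose
    columns (variables) each have mean 0."""
    X = np.asarray(X, dtype=float).T          # (dim x N) -> (N x dim)
    return X - X.mean(axis=0, keepdims=True)  # subtract each variable's mean
```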

If your data is fundamentally discrete (rolls of a die) and there is no "topology" to it (no values are considered closer to each other than others), then you want a discrete entropy estimator, not a continuous entropy estimator like the m-spacing estimator. Discrete entropy is essentially just the negative mean log of the discrete probabilities.

However, many people have data that represents *truncated* or *rounded* continuous values, like the brightness values in an image. In this case, you essentially want to recreate the original random variable as best you can. For example, if your true image values ranged over 0.0-256.0, including fractional values, but have been truncated to integers, then the approach I use is to add a uniformly distributed random number to each value to attempt to recreate samples from the original distribution. These values will, with high probability, not repeat, and you can get an appropriate answer with an m-spacings estimate. In short, add random variates to your values that are uniformly distributed over the range of truncation.

If you have questions about RADICAL, please email me at

elm"at"cs"dot"umass"dot"edu