

Research Topic: Large-Scale Sparse Learning

Sparse learning has recently become a popular research topic, due to its ability to perform simultaneous classification (or regression) and feature selection. The most widely used sparsity-inducing norm is the L1 norm, which is convex and enjoys many favorable theoretical guarantees.
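For concreteness, the canonical L1-regularized least squares (lasso) problem can be written as

    \min_x \; \tfrac{1}{2} \| A x - b \|_2^2 + \lambda \| x \|_1,

where A is the data matrix, b is the response vector, and the parameter \lambda > 0 trades off the loss against the sparsity of the solution x.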

In many applications, the data features have intrinsic structure; for example, the features may follow a meaningful order or form natural groups. Such structure can serve as prior information for the model to be learned. Incorporating this structural information into sparse learning yields the so-called structured sparse learning. It has been shown that structured sparse learning can achieve improved regression/classification performance and yield interpretable results via the identified markers. Current research on structured sparse learning is based on various extensions of the L1 norm; one representative extension is sketched below.
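As one such extension, the (non-overlapping) group lasso penalty replaces the L1 norm with a sum of Euclidean norms over predefined, disjoint feature groups G_1, ..., G_k:

    \Omega(x) = \sum_{i=1}^{k} w_i \| x_{G_i} \|_2,

where x_{G_i} denotes the subvector of x indexed by G_i and w_i > 0 is a group weight. This penalty encourages entire groups of features to be selected or discarded together.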

I mainly study the efficient optimization of different structured sparse learning models, and have developed the SLEP (Sparse Learning with Efficient Projections) package. The main challenge in these sparse learning formulations is that the penalty terms are nonsmooth. To make use of existing first-order optimization methods such as accelerated gradient descent, a key building block is efficiently solving the associated Euclidean projections (proximal operators, Moreau-Yosida regularization). For different penalties, we employ different techniques that exploit the specific structure of the penalty to achieve fast convergence; a sketch of the overall scheme follows.
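As a minimal Matlab sketch (this is not the SLEP code itself; the function names lasso_fista and soft_threshold are illustrative), the following shows how an accelerated gradient method alternates a gradient step on the smooth loss with the proximal operator of the penalty, which for the L1 norm reduces to componentwise soft-thresholding:

function x = lasso_fista(A, b, lambda, maxIter)
% Solve min_x 0.5*||A*x - b||^2 + lambda*||x||_1 with FISTA-style
% accelerated gradient descent.
L = norm(A)^2;              % Lipschitz constant of the gradient
x = zeros(size(A, 2), 1);   % current iterate
y = x;                      % search point
t = 1;                      % momentum parameter
for k = 1:maxIter
    grad = A' * (A * y - b);                 % gradient of the smooth part
    xNew = soft_threshold(y - grad / L, lambda / L);  % proximal step
    tNew = (1 + sqrt(1 + 4 * t^2)) / 2;      % momentum update
    y = xNew + ((t - 1) / tNew) * (xNew - x);
    x = xNew;
    t = tNew;
end
end

function x = soft_threshold(v, tau)
% Proximal operator of tau*||x||_1: componentwise soft-thresholding.
x = sign(v) .* max(abs(v) - tau, 0);
end

Each iteration costs one gradient evaluation plus one proximal step, so the efficiency of the overall method hinges on how fast the proximal operator of the chosen penalty can be evaluated.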

The current version of the SLEP package is implemented in Matlab, with the associated Euclidean projections written in C. New functions and features are being added to the SLEP package, and a C version will be distributed soon.


My contributions to this topic:

Some useful sources on this topic:
