

Jun Liu's Publications


2010

Moreau-Yosida Regularization for Grouped Tree Structure Learning        [PDF]        [Code]        [Bib \cite{Liu:nips:2010}]       [Poster]

Jun Liu and Jieping Ye

Advances in Neural Information Processing Systems (NIPS) 2010

We developed an efficient algorithm for the tree structured group Lasso, an extension of our UAI'09 work.

We showed that the Moreau-Yosida regularization associated with the tree structured group Lasso, one of the key building blocks, can be computed analytically.

We also showed how to specify the effective interval for the regularization parameter, which is key to the practical application of this method.

The main technique employed in the proof is the subdifferential, which plays an important role in non-smooth convex analysis.

We reported experimental results on facial expression classification, where the image pixels are organized into a tree structure.
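
As an illustration, the analytical computation amounts to one pass of group soft-thresholding from the leaf groups up to the root; below is a minimal numerical sketch (the function and variable names are ours, not from the SLEP package):

    import numpy as np

    def tree_group_prox(v, groups):
        # Prox of sum_i w_i * ||x[G_i]||_2 where the groups form a tree:
        # any two groups are either disjoint or nested. `groups` lists
        # (indices, weight) pairs ordered from the leaves to the root;
        # one bottom-up pass of group soft-thresholding is exact.
        x = v.copy()
        for idx, w in groups:
            norm = np.linalg.norm(x[idx])
            x[idx] = 0.0 if norm <= w else (1.0 - w / norm) * x[idx]
        return x

    # toy tree: leaf groups {0,1} and {2,3} under the root group {0,1,2,3}
    v = np.array([1.0, -2.0, 0.3, 0.1])
    groups = [(np.array([0, 1]), 0.5), (np.array([2, 3]), 0.5), (np.arange(4), 1.0)]
    print(tree_group_prox(v, groups))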

 

Efficient L1/Lq Norm Regularization        [PDF]        [Bib \cite{Liu:L1Lq:2010}]

Jun Liu and Jieping Ye

arXiv:1009.4766v1

We considered the group Lasso via L1/Lq regularization. One key subroutine is the L1/Lq regularized Euclidean projection.

We revealed the key theoretical properties of the L1/Lq regularized Euclidean projection, which explain why the computation for general q is significantly more challenging than for q=2 and q=infinity.

We proposed to efficiently compute the L1/Lq regularized Euclidean projection by solving two zero finding problems.
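
For intuition, the q=2 case admits the closed-form solution below (per-group soft-thresholding), which is exactly what is missing for general q; a minimal sketch with names of our own choosing:

    import numpy as np

    def l1_l2_prox(v, groups, lam):
        # Closed-form prox of lam * sum_g ||x[G_g]||_2 for disjoint groups
        # (the q = 2 case): per-group soft-thresholding. For general q in
        # (1, inf) there is no closed form and zero finding is needed.
        x = np.zeros_like(v)
        for idx in groups:
            norm = np.linalg.norm(v[idx])
            if norm > lam:
                x[idx] = (1.0 - lam / norm) * v[idx]
        return x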

 

Fast Overlapping Group Lasso        [PDF]        [Bib \cite{Liu:fogl:2010}]

Jun Liu and Jieping Ye

arXiv:1009.0306v1

We considered the efficient optimization of the overlapping group Lasso penalized problem.

We revealed several key properties of the proximal operator associated with the overlapping group Lasso.

We computed the proximal operator by solving the smooth and convex dual problem, which allows the use of gradient descent type algorithms for the optimization.
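
A minimal sketch of the dual approach, assuming projected gradient as the gradient descent type method (the paper's actual solver may differ in its details):

    import numpy as np

    def overlapping_group_prox(v, groups, n_iter=200):
        # Prox of sum_g w_g * ||x[G_g]||_2 with overlapping groups, via
        # projected gradient on the smooth dual:
        #   min_Y 0.5 * ||v - sum_g E_g^T y_g||^2  s.t.  ||y_g||_2 <= w_g.
        # `groups` is a list of (indices, weight) pairs.
        Y = [np.zeros(len(idx)) for idx, _ in groups]
        cover = np.zeros(len(v))               # how many groups touch each entry
        for idx, _ in groups:
            cover[idx] += 1
        step = 1.0 / max(cover.max(), 1.0)     # 1 / Lipschitz constant of the dual gradient
        for _ in range(n_iter):
            x = v.copy()                       # primal iterate x = v - sum_g E_g^T y_g
            for (idx, _), y in zip(groups, Y):
                x[idx] -= y
            for (idx, w), y in zip(groups, Y):
                y += step * x[idx]             # dual gradient step
                norm = np.linalg.norm(y)
                if norm > w:
                    y *= w / norm              # project back onto the ball of radius w_g
        x = v.copy()
        for (idx, _), y in zip(groups, Y):
            x[idx] -= y
        return x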

 

An Efficient Algorithm for a Class of Fused Lasso Problems        [PDF]        [Code]        [Bib \cite{Liu:kdd:2010}]       [Poster]

Jun Liu, Lei Yuan, and Jieping Ye

ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) 2010

The fused Lasso penalty enforces sparsity in both the coefficients and their successive differences, which is desirable for applications with features ordered in some meaningful way.

We proposed an efficient algorithm for solving this class of problems.

A key building block in each iteration is the fused Lasso signal approximator, for which we proposed a Subgradient Finding Algorithm (SFA).

The proposed SFA can 1) control the duality gap, 2) be significantly accelerated with a novel restart technique, and 3) allow warm starts.
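
The sketch below is not the paper's SFA; it is a plain substitute that solves the total-variation part by projected gradient on its box-constrained dual, then uses the known soft-thresholding property to account for the L1 term:

    import numpy as np

    def flsa(v, lam1, lam2, n_iter=500):
        # min_x 0.5*||x - v||^2 + lam1*||x||_1 + lam2*sum_i |x[i+1] - x[i]|.
        # First solve the lam1 = 0 (total variation) problem on the dual box
        # {|u_i| <= lam2} by projected gradient, then soft-threshold by lam1.
        def Dt(u):                             # D^T u, where (D x)_i = x[i+1] - x[i]
            return np.concatenate(([0.0], u)) - np.concatenate((u, [0.0]))
        u = np.zeros(len(v) - 1)
        for _ in range(n_iter):
            x = v - Dt(u)                      # primal iterate
            u = np.clip(u + 0.25 * np.diff(x), -lam2, lam2)  # safe step: ||D D^T|| <= 4
        x = v - Dt(u)
        return np.sign(x) * np.maximum(np.abs(x) - lam1, 0.0)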

 

Mining Sparse Representations: Formulations, Algorithms, and Applications    [PDF]        [Code]        [Bib \cite{Liu:sdm:2010}]

Jun Liu, Shuiwang Ji, and Jieping Ye

SIAM Conference on Data Mining (SDM) 2010

This tutorial covers sparse learning formulations, algorithms, and applications.

 

Generalized Low Rank Approximations of Matrices Revisited        [PDF]        [Bib \cite{Liu:tnn:2010}]

Jun Liu, Songcan Chen, Zhihua Zhou and Xiaoyang Tan

IEEE Transactions on Neural Networks, 21, no. 4 (2010): 621-632

Generalized Low Rank Approximations of Matrices (GLRAM) treats each data point as a two-dimensional matrix, and uses left and right transformation matrices for dimensionality reduction.

We revisited GLRAM to 1) reveal its relationship with the Singular Value Decomposition, and 2) explore when and why it yields good compression performance.
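
For reference, the standard GLRAM alternating scheme (which the paper revisits) can be sketched as follows; the names and defaults here are illustrative:

    import numpy as np

    def glram(As, l1, l2, n_iter=10):
        # Alternating scheme for GLRAM: find column-orthonormal L (r x l1)
        # and R (c x l2) maximizing sum_i ||L^T A_i R||_F^2. Each half-step
        # is an eigendecomposition of a small symmetric matrix.
        c = As[0].shape[1]
        R = np.eye(c, l2)
        for _ in range(n_iter):
            ML = sum(A @ R @ R.T @ A.T for A in As)
            L = np.linalg.eigh(ML)[1][:, -l1:]     # top-l1 eigenvectors
            MR = sum(A.T @ L @ L.T @ A for A in As)
            R = np.linalg.eigh(MR)[1][:, -l2:]     # top-l2 eigenvectors
        return L, R                                # each A_i is compressed as L^T A_i R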

 

The Group Dantzig Selector        [PDF]        [Bib \cite{Liu:aistats:2010}]

Han Liu, Jian Zhang, Xiaoye Jiang, and Jun Liu

International Conference on Artificial Intelligence and Statistics (AI and Statistics) 2010

The Group Dantzig Selector is a new method for high-dimensional sparse regression with group structure.

We obtained improved theoretical results over the standard Dantzig Selector, under a group restricted isometry condition.

 

Face Liveness Detection from A Single Image with Sparse Low Rank Bilinear Discriminative Model        [PDF]        [Bib \cite{Tan:eccv:2010}]

Xiaoyang Tan, Yi Li, Jun Liu, and Lin Jiang

European Conference on Computer Vision (ECCV) 2010

How can a face recognition system cope with spoofing by photographs or videos?

 



2009

SLEP: Sparse Learning with Efficient Projections       [PDF]        [Code]        [Bib \cite{Liu:2009:SLEP:manual}]

Jun Liu, Shuiwang Ji, and Jieping Ye

Arizona State University 2009

This is the manual for the SLEP package developed and maintained at Arizona State University.

 

Multi-Task Feature Learning Via Efficient L2,1-Norm Minimization        [PDF]        [Code]        [Bib \cite{Liu:uai:2009}]        [Poster]

Jun Liu, Shuiwang Ji, and Jieping Ye

Conference on Uncertainty in Artificial Intelligence (UAI) 2009

One way to model task relatedness in multi-task learning is to assume that different tasks share a common set of features.

The L2,1-norm induces a solution with group sparsity, thus enabling feature selection at the group level.

We proposed several reformulations which can be solved efficiently via Nesterov's method.
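
One standard ingredient in solvers for this problem is the proximal operator of the L2,1-norm, which shrinks whole feature rows across tasks; a minimal sketch (not the paper's exact reformulation):

    import numpy as np

    def l21_prox(V, lam):
        # Prox of lam * ||W||_{2,1} = lam * sum_j ||W[j, :]||_2: each row of
        # V (one row per feature, one column per task) is shrunk toward zero,
        # so features are kept or discarded jointly across all tasks.
        norms = np.linalg.norm(V, axis=1, keepdims=True)
        return np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12)) * V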

 

Efficient Euclidean Projections in Linear Time        [PDF]        [Code]        [Bib \cite{Liu:icml:2009}]        [Poster]

Jun Liu and Jieping Ye

International Conference on Machine Learning (ICML) 2009

The L1-norm ball constraint is widely employed in sparse learning to induce the desired sparsity.

We showed that the Euclidean projection onto the L1-norm ball can be converted to a zero finding problem.

We proposed an improved bisection algorithm for finding the desired root.

Our algorithm can solve a problem of size 10M in about 0.2 seconds on a personal computer.
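
A minimal sketch of the zero-finding reduction, using plain bisection in place of the paper's improved variant:

    import numpy as np

    def project_l1_ball(v, z):
        # Euclidean projection onto {x : ||x||_1 <= z}. The optimal x is a
        # soft-thresholding of v, and the threshold theta is the root of the
        # monotone function f(theta) = sum_i max(|v_i| - theta, 0) - z.
        a = np.abs(v)
        if a.sum() <= z:
            return v.copy()                    # already inside the ball
        lo, hi = 0.0, a.max()
        for _ in range(60):                    # plain bisection on theta
            theta = 0.5 * (lo + hi)
            if np.maximum(a - theta, 0.0).sum() > z:
                lo = theta
            else:
                hi = theta
        theta = 0.5 * (lo + hi)
        return np.sign(v) * np.maximum(a - theta, 0.0)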

 

Large-Scale Sparse Logistic Regression        [PDF]        [Code]        [Bib \cite{Liu:kdd:2009}]

Jun Liu, Jianhui Chen, and Jieping Ye

ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) 2009

We proposed the Lassplore algorithm for L1-norm regularized logistic regression, based on Nesterov's method.

We developed an adaptive line search scheme for Nesterov's method, which adaptively tunes the step size.
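
The sketch below is a generic proximal-gradient method with a backtracking line search in the same spirit; it is not Lassplore itself (which builds on Nesterov's accelerated scheme):

    import numpy as np
    from scipy.special import expit

    def sparse_logreg(X, y, lam, n_iter=100):
        # L1-regularized logistic regression by proximal gradient with a
        # simple backtracking line search; labels y must be in {-1, +1}.
        loss = lambda w: np.mean(np.logaddexp(0.0, -y * (X @ w)))
        w = np.zeros(X.shape[1])
        step = 1.0
        for _ in range(n_iter):
            g = X.T @ (-y * expit(-y * (X @ w))) / len(y)   # gradient of the loss
            while True:                                     # backtracking line search
                z = w - step * g
                w_new = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
                d = w_new - w
                if loss(w_new) <= loss(w) + g @ d + d @ d / (2.0 * step):
                    break
                step *= 0.5
            w = w_new
        return w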

 

Learning the Optimal Neighborhood Kernel for Classification        [PDF]        [Bib \cite{Liu:ijcai:2009}]

Jun Liu, Jianhui Chen, Songcan Chen, and Jieping Ye

International Joint Conference on Artificial Intelligence (IJCAI) 2009

We proposed to treat the pre-specified kernel matrix as a noisy observation of a given ideal kernel.

The optimal neighborhood kernel is learned in the neighborhood of the pre-specified kernel matrix for classification.

 

Mining Brain Region Connectivity for Alzheimer's Disease Study via Sparse Inverse Covariance Estimation        [PDF]        [Bib \cite{Sun:kdd:2009}]

Liang Sun, Rinkal Patel, Jun Liu, Kewei Chen, Teresa Wu, Jing Li, Eric Reiman, and Jieping Ye

ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) 2009

We applied the sparse inverse covariance estimation to mine brain region connectivity for Alzheimer's Disease study.

Our proposed method allows user feedback (e.g., prior domain knowledge) to be incorporated into the estimation process.
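
For a feel of the basic (feedback-free) estimation step, here is a sketch using scikit-learn's GraphicalLasso on hypothetical data; the paper's own solver and its feedback mechanism are not reproduced here:

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((120, 10))          # hypothetical data: subjects x brain regions
    model = GraphicalLasso(alpha=0.2).fit(X)
    P = model.precision_                        # sparse inverse covariance estimate
    connected = np.abs(P) > 1e-6
    np.fill_diagonal(connected, False)
    print(np.argwhere(connected))               # pairs of directly connected regions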

 

A Convex Formulation for Learning Shared Structures from Multiple Tasks        [PDF]        [Bib \cite{Chen:icml:2009}]

Jianhui Chen, Lei Tang, Jun Liu, and Jieping Ye

International Conference on Machine Learning (ICML) 2009

We presented an improved formulation (iASO) for multi-task learning based on the non-convex alternating structure optimization (ASO) algorithm.

iASO, itself a non-convex formulation, was converted into a relaxed convex one.

We proposed an alternating optimization algorithm (cASO) which solves the convex relaxation efficiently.

A theoretical condition under which cASO finds a globally optimal solution to iASO was given.

 

Efficient Recovery of Jointly Sparse Vectors        [PDF]        [Bib \cite{Sun:nips:2009}]

Liang Sun, Jun Liu, Jianhui Chen, and Jieping Ye

Advances in Neural Information Processing Systems (NIPS) 2009

Multiple Measurement Vector (MMV) is an extension of the single measurement vector model employed in standard compressive sensing.

We considered the reconstruction of sparse signals in the MMV model, in which the signal, represented as a matrix, consists of a set of jointly sparse vectors.

We proposed a new (dual) reformulation of the convex optimization problem in MMV and developed an efficient algorithm based on the prox-method.

 

Learning Brain Connectivity of Alzheimer's Disease from Neuroimaging Data        [PDF]        [Code]        [Bib \cite{Huang:nips:2009}]        [Poster]

Shuai Huang, Jing Li, Liang Sun, Jun Liu, Teresa Wu, Kewei Chen, Adam Fleisher, Eric Reiman, and Jieping Ye

Advances in Neural Information Processing Systems (NIPS) 2009

We revealed an important "monotone property" of sparse inverse covariance estimation.

We studied the brain connectivity of Alzheimer's Disease and obtained interesting results.

 

Semi-Random Subspace Method for Face Recognition        [PDF]        [Bib \cite{Zhu:ivc:2009}]

Yuliang Zhu, Jun Liu, and Songcan Chen

Image & Vision Computing, 27, no. 9 (2009): 1358-1370

We applied sub-pattern and random subspace techniques to face recognition.

 

Face Recognition under Occlusions and Variant Expressions with Partial Similarity        [PDF]        [Bib \cite{Tan:tfs:2009}]

Xiaoyang Tan, Songcan Chen, Zhihua Zhou, and Jun Liu

IEEE Transactions on Information Forensics & Security, 4, no. 2 (2009): 217-230

We proposed partial similarity for face recognition under occlusions and varying expressions.

 



2008

Fractional order Singular Value Decomposition Representation for Face Recognition        [PDF]        [Bib \cite{Liu:pr:fsvd:2008}]

Jun Liu, Songcan Chen and Xiaoyang Tan

Pattern Recognition 41, no. 1 (2008): 378-395

Face images are naturally represented as two-dimensional matrices.

What is the influence of facial expression on the leading singular vectors?

Can a fractional order Singular Value Decomposition help?

 

A Study on Three Linear Discriminant Analysis Based Methods in Small Sample Size Problem        [PDF]        [Bib \cite{Liu:pr:lda:2008}]

Jun Liu, Songcan Chen and Xiaoyang Tan

Pattern Recognition, 41, no. 1 (2008): 102-116

Several variants of Linear Discriminant Analysis have been developed for the Small Sample Size problem.

How are they related? What are the differences? When does one method perform better than another?

 

Pattern Representation in Feature Extraction and Classification - Matrix Versus Vector        [PDF]        [Bib \cite{Wang:tnn:2008}]

Zhe Wang, Songcan Chen, Jun Liu, and Daoqiang Zhang

IEEE Transactions on Neural Networks, 19, no. 5 (2008): 758-769

What are the differences and relationships between classifiers built on matrix representations versus vector representations?

 



2007

Comments on "Efficient and Robust Feature Extraction by Maximum Margin Criterion"        [PDF]        [Bib \cite{Liu:tnn:2007}]

Jun Liu, Songcan Chen, Xiaoyang Tan and Daoqiang Zhang

IEEE Transactions on Neural Networks, 18, no. 6 (2007): 1862-1864

 

Efficient Pseudo-Inverse Linear Discriminant Analysis and Its Nonlinear Form for Face Recognition        [PDF]        [Bib \cite{Liu:ijprai:2007}]

Jun Liu, Songcan Chen, Xiaoyang Tan and Daoqiang Zhang

International Journal of Pattern Recognition and Artificial Intelligence, 21, no. 8 (2007): 1265-1278

 

Single Image Subspace for Face Recognition        [PDF]        [Bib \cite{Liu:amfg:2007}]

Jun Liu, Songcan Chen, Zhihua Zhou, and Xiaoyang Tan

IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG, In conjunction with ICCV) 2007

Can we build a subspace for each image using differently filtered versions of it?

Quite good results were obtained on well-known face data sets such as FERET, AR, and Extended YALE.

 



2006

Discriminant Common Vectors versus Neighbourhood Components Analysis and Laplacianfaces: A Comparative Study in Small Sample Size Problem        [PDF]        [Bib \cite{Liu:ivc:2006}]

Jun Liu and Songcan Chen

Image & Vision Computing 24, no. 3 (2006): 249-262

What are the differences and relationships among these methods?

Theoretical and empirical comparisons are given.

 

Non-Iterative Generalized Low Rank Approximation of Matrices        [PDF]        [Bib \cite{Liu:prl:2006}]

Jun Liu and Songcan Chen

Pattern Recognition Letters 27, no. 9 (2006): 1002-1008

Is a non-iterative generalized low rank approximation of matrices possible?

How should the parameter values be determined?

 

Sub-Intrapersonal Space Analysis for Face Recognition        [PDF]        [Bib \cite{Tan:neuro:2006}]

Xiaoyang Tan, Jun Liu and Songcan Chen

Neurocomputing 69, no. 13-15 (2006): 1796-1801

We proposed a sub-intrapersonal space method for face recognition.

 

Learning Non-Metric Partial Similarity Based on Maximal Margin Criterion        [PDF]        [Bib \cite{Tan:cvpr:2006}]

Xiaoyang Tan, Songcan Chen, Zhihua Zhou and Jun Liu

IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) 2006

We proposed a non-metric partial similarity measure, learned with a maximal margin criterion, for face recognition.

 

Recognition From a Single Sample Per Person With Multiple SOM Fusion        [PDF]        [Bib \cite{Tan:isnn:2006}]

Xiaoyang Tan, Jun Liu and Songcan Chen

International Symposium on Neural Networks (ISNN) 2006

 



2005

Resampling LDA/QR and PCA+LDA for Face Recognition        [PDF]        [Bib \cite{Liu:ajcai:2005}]

Jun Liu and Songcan Chen

Australian Joint Conference on Artificial Intelligence (AJCAI) 2005

 

Weighted SOM-Face: Selecting Local Features for Recognition From Individual Face Image        [PDF]        [Bib \cite{Tan:ideal:2005}]

Xiaoyang Tan, Jun Liu and Songcan Chen

Intelligent Data Engineering and Automated Learning (IDEAL) 2005

 

Representing Image Matrices: Eigenimages Versus Eigenvectors        [PDF]        [Bib \cite{Zhang:isnn:2005}]

Daoqiang Zhang, Songcan Chen and Jun Liu

International Symposium on Neural Networks (ISNN) 2005

 



2004

Progressive Principal Component Analysis        [PDF]        [Bib \cite{Liu:isnn:2004}]

Jun Liu, Songcan Chen and Zhihua Zhou

International Symposium on Neural Networks (ISNN) 2004

 

Making FLDA Applicable to Face Recognition With One Sample Per Person        [PDF]        [Bib \cite{Chen:pr:2004}]

Songcan Chen, Jun Liu and Zhihua Zhou

Pattern Recognition 37, no. 7 (2004): 1553-1555

 
