### SCOPE: Scalable Composite Optimization for Learning on Spark

**2016-01-30**

1602.00133 | stat.ML

Many machine learning models, such as logistic regression (LR) and support
vector machine (SVM), can be formulated as composite optimization problems.
Recently, many distributed stochastic optimization (DSO) methods have been
proposed to solve large-scale composite optimization problems, and they have
shown better performance than traditional batch methods. However, most of these
DSO methods are not scalable enough. In this paper, we propose a novel DSO
method, called scalable composite optimization for learning (SCOPE), and
implement it on the fault-tolerant distributed platform Spark. SCOPE is both
computation-efficient and communication-efficient. Theoretical analysis shows
that SCOPE is convergent with a linear convergence rate when the objective
function is convex. Furthermore, empirical results on real datasets show that
SCOPE can outperform other state-of-the-art distributed learning methods on
Spark, including both batch learning methods and DSO methods.
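As a sketch of the problem setting the abstract refers to (the notation below is an assumption for illustration, not taken from the paper), a composite optimization problem over $n$ training examples minimizes a data-fitting term plus a regularizer:

```latex
% Assumed standard composite form; f_i is the loss on example i, R a regularizer.
\min_{w \in \mathbb{R}^d} \; P(w) \;=\; \frac{1}{n} \sum_{i=1}^{n} f_i(w) \;+\; R(w)
```

For example, LR corresponds to the logistic loss $f_i(w) = \log\bigl(1 + \exp(-y_i x_i^{\top} w)\bigr)$ and SVM to the hinge loss $f_i(w) = \max\bigl(0,\, 1 - y_i x_i^{\top} w\bigr)$, typically with $R(w) = \frac{\lambda}{2}\lVert w \rVert^2$.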

# Related Articles

**2015-12-13**

1512.04011 | cs.LG

Despite the importance of sparsity in many large-scale applications, there
are few methods for distr…

**2016-11-07**

1611.02189 | cs.LG

The scale of modern datasets necessitates the development of efficient
distributed optimization meth…

**2016-04-04**

1604.00981 | cs.LG

Distributed training of deep learning models on large-scale training data is
typically conducted wit…

**2015-12-13**

1512.04039 | cs.LG

With the growth of data and necessity for distributed optimization methods,
solvers that work well o…

**2017-12-20**

1712.07495 | cs.DC

We consider the problem of learning a high-dimensional but low-rank matrix
from a large-scale datase…

**2019-03-21**

1903.08857 | cs.DC

Motivated by recent developments in serverless systems for large-scale
machine learning as well as i…

**2015-02-12**

1502.03508 | cs.LG

Distributed optimization methods for large-scale machine learning suffer from
a communication bottle…

**2016-10-04**

1610.00970 | stat.ML

Stochastic optimization algorithms with variance reduction have proven
successful for minimizing lar…

**2019-01-16**

1901.05134 | math.OC

For optimization of a sum of functions in a distributed computing
environment, we present a novel co…

**2015-08-09**

1508.02087 | math.OC

We propose a new stochastic L-BFGS algorithm and prove a linear convergence
rate for strongly convex…

**2018-09-20**

1809.07599 | cs.LG

Huge scale machine learning problems are nowadays tackled by distributed
optimization algorithms, i.…

**2016-10-31**

1610.10060 | stat.ML

As the size of modern data sets exceeds the disk and memory capacities of a
single computer, machine…

**2019-03-12**

1903.04488 | cs.LG

Large-scale distributed training of neural networks is often limited by
network bandwidth, wherein t…

**2019-01-19**

1901.06587 | cs.LG

In this paper we focus on the problem of finding the optimal weights of the
shallowest of neural net…

**2019-01-10**

1901.03040 | cs.LG

Due to its efficiency and ease to implement, stochastic gradient descent
(SGD) has been widely used…

**2019-01-24**

1901.08689 | cs.LG

The stochastic variance-reduced gradient method (SVRG) and its accelerated
variant (Katyusha) have a…