
Title

Your Name / Student ID

Summary (1-2 paragraphs)

Briefly summarize the paper and its contributions in “your own words”.

Contributions (1-2 pages)

What are the major contributions of the paper(s), e.g., theoretical, methodological, algorithmic, empirical contributions; bridging fields; or providing a critical analysis? In this section, you should touch on each of the following dimensions:

• Prior art: What are the state-of-the-art results in this area? What are the critical limitations or challenges of existing work in this area prior to the publication of the paper(s)? What challenges were addressed in the paper(s)?

• Main ideas/hypotheses: What are the key ideas or hypotheses that this work adopted to overcome the aforementioned limitations? Please explain the main ideas in detail and in “your own words”. You may use figures or diagrams to support your explanation.

• Results: What results do the paper(s) provide to support the main ideas/hypotheses? Why did the authors choose to provide such results? What are the conclusions of the paper?

Strengths and weaknesses (1 page)

In your opinion, what are the strengths and weaknesses of the paper(s)? This is the place for you to praise and to critique the paper. Please provide a thorough assessment of the strengths and weaknesses, touching on each of the following dimensions:

• Clarity: Is the paper clearly written? Is it well organized? Does it adequately inform the reader?

• Quality: Is the submission technically sound? Are claims well supported (e.g., by theoretical analysis or experimental results)? Are the methods used appropriate?

• Significance: Are the results important? Does the paper address a difficult task in a better way than previous work? Does it advance the state-of-the-art in a demonstrable way? What is unique about the paper?

Questions (up to 1 page)

Please list any questions that you might want to ask the authors of this paper and suggestions that you might want to give to the authors.

Limitations (up to 1 page)

What do you think are the limitations of the paper(s)? Have the authors adequately addressed the limitations of their work? If not, you may provide constructive suggestions for improvement.

Final Discussion

You may summarize the paper(s) again here and then provide additional discussion on the topics that you believe are important, e.g., potential future research directions and your personal experience.
Abstract

 

We are concerned with a worst-case scenario in model generalization, in the sense that a model aims to perform well on many unseen domains while there is only one single domain available for training. We propose a new method named adversarial domain augmentation to solve this Out-of-Distribution (OOD) generalization problem. The key idea is to leverage adversarial training to create “fictitious” yet “challenging” populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint. Detailed theoretical analysis is provided to justify our formulation, while extensive experiments on multiple benchmark datasets indicate its superior performance in tackling single domain generalization.

 

1. Introduction

 

Recent years have witnessed the rapid deployment of machine learning models for broad applications [17, 42, 3, 60]. A key assumption underlying this remarkable success is that the training and test data usually follow similar statistics. Otherwise, even strong models (e.g., deep neural networks) may break down on unseen or Out-of-Distribution (OOD) test domains [2]. Incorporating data from multiple training domains alleviates this issue to some extent [21]; however, this may not always be applicable due to data acquisition budgets or privacy issues. An interesting yet seldom investigated problem then arises: can a model generalize from one source domain to many unseen target domains? In other words, how can we maximize model generalization when there is only a single domain available for training?

 

The discrepancy between source and target domains, also known as domain or covariate shift [48], has been intensively studied in domain adaptation [30, 33, 57, 24] and domain generalization [32, 9, 22, 4]. Despite their

¹The source code and pre-trained models are publicly available at: https://github.com/joffery/M-ADA.

 

Long Zhao
Rutgers University
lz311@cs.rutgers.edu

Xi Peng
University of Delaware
xipeng@udel.edu

[Figure 1. The domain discrepancy: (a) domain adaptation, (b) domain generalization, and (c) single domain generalization.]

 

various success in tackling the ordinary domain discrepancy issue, we argue that existing methods can hardly succeed in the aforementioned single domain generalization problem. As illustrated in Fig. 1, the former usually expects the availability of target domain data (either labeled or unlabeled), while the latter always assumes multiple (rather than one) domains are available for training. This fact emphasizes the necessity of developing a new learning paradigm for single domain generalization.

 

In this paper, we propose adversarial domain augmentation (Sec. 3.1) to solve this challenging task. Inspired by the recent success of adversarial training [35, 50, 49, 36, 24], we cast the single domain generalization problem in a worst-case formulation [44, 20]. The goal is to use a single source domain to generate “fictitious” yet “challenging” populations, from which a model can learn to generalize with theoretical guarantees (Sec. 4).
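The worst-case idea can be illustrated with a minimal sketch (not the authors' implementation; the linear softmax classifier, step size, and step count are all hypothetical): a “fictitious” sample is produced by gradient ascent on the input, moving it in the direction that maximizes the task loss for the current model.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def augment_domain(W, b, x, y, step_size=0.5, n_steps=5):
    """Inner maximization sketch: for a linear softmax classifier with
    weights W, bias b, repeatedly move input x toward higher
    cross-entropy loss, yielding a 'fictitious' hard sample."""
    x_aug = x.copy()
    for _ in range(n_steps):
        p = softmax(x_aug @ W + b)          # class probabilities
        # dL/dx for cross-entropy with one-hot labels y: (p - y) W^T
        grad = (p - y) @ W.T
        x_aug = x_aug + step_size * grad    # ascend the loss
    return x_aug
```

In the full method the perturbation is additionally constrained (semantic consistency, and the WAE-based relaxation discussed below); this sketch shows only the unconstrained adversarial ascent.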

 

However, technical barriers exist when applying adversarial training for domain augmentation. On the one hand, it is hard to create “fictitious” domains that are largely different from the source, due to the contradiction with the semantic consistency constraint [11] in the worst-case formulation. On the other hand, we expect to explore many “fictitious” domains to guarantee sufficient coverage, which may result in significant computational overhead. To circumvent these barriers, we propose to relax the worst-case constraint (Sec. 3.2) via a Wasserstein Auto-Encoder (WAE) [52] to encourage large domain transportation in the input space. Moreover, rather than learning a series of ensemble models [56], we organize adversarial domain augmentation via meta-learning [6] (Sec. 3.3), yielding a highly efficient model with improved single domain generalization.
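The meta-learning organization can be sketched in the spirit of first-order MAML [6] on a toy linear regression task (the task, learning rates, and function names are illustrative assumptions, not the paper's actual pipeline): meta-train on the source domain, then meta-test the adapted weights on an augmented “fictitious” domain, so a single model is driven to perform well on both.

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of the mean squared error 0.5*||Xw - y||^2 / n w.r.t. w."""
    n = X.shape[0]
    return X.T @ (X @ w - y) / n

def meta_step(w, src, aug, inner_lr=0.1, outer_lr=0.1):
    """One meta-update: inner step on the source domain (meta-train),
    then a first-order outer step evaluated on an augmented domain
    (meta-test)."""
    Xs, ys = src
    Xa, ya = aug
    w_inner = w - inner_lr * mse_grad(w, Xs, ys)      # meta-train on source
    # first-order approximation: outer gradient taken at the adapted weights
    w_new = w - outer_lr * mse_grad(w_inner, Xa, ya)  # meta-test on augmentation
    return w_new
```

Iterating `meta_step` over successively generated augmented domains is what lets a single model, rather than an ensemble, absorb the “fictitious” populations.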
 
