[robotics-worldwide] [meetings] ICML / IJCAI / AAMAS 2018 Workshop on Towards learning with limited labels: Equivariance, Invariance, and Beyond

Rogerio Feris rogerioferis at gmail.com
Thu Apr 26 17:17:32 PDT 2018

Website: https://sites.google.com/site/icml18limitedlabels/

Deep learning has shown impressive performance improvements on a variety of
tasks in the vision, speech, and language domains. However, a large amount
of labeled data is often needed to deliver these remarkable accuracy gains.
The goal of this workshop is to facilitate discussion on the role of
equivariance/invariance of feature maps in reducing the dependence on
labeled examples, including their synergies with unsupervised and
self-supervised learning methods that define and solve auxiliary tasks on
unlabeled data to learn representations (such as auto-encoding, context
prediction, and predicting one modality from another). Beyond the
group-theoretical notions of equivariance and invariance, the workshop also
welcomes contributions based on relaxed, non-group-theoretical notions of
equivariance and invariance with respect to a set of transformations
(possibly input-specific and learned from data).
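To make the two properties concrete, here is a minimal NumPy sketch (purely illustrative, not part of the call): a circular 1-D cross-correlation is equivariant to cyclic shifts of its input, and global max pooling on top of it is invariant to those shifts. The filter, signal, and sizes are arbitrary choices for the demonstration.

```python
import numpy as np

def circ_corr(x, w):
    """Circular 1-D cross-correlation: a toy 'feature map'."""
    n = len(x)
    return np.array([sum(x[(i + j) % n] * w[j] for j in range(len(w)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=16)    # toy 1-D signal
w = rng.normal(size=3)     # toy filter

s = 5                      # cyclic shift: the transformation of interest
y = circ_corr(x, w)
y_shifted_input = circ_corr(np.roll(x, s), w)

# Equivariance: transforming the input transforms the feature map in the
# same way (here, by the same cyclic shift).
assert np.allclose(y_shifted_input, np.roll(y, s))

# Invariance: a symmetric pooling on top of the equivariant map is
# unaffected by the transformation.
assert np.isclose(y_shifted_input.max(), y.max())
```

The same pattern — an equivariant map followed by a pooling stage — underlies many of the architectures the workshop topics below refer to.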

We welcome submissions on topics including (but not limited to):

* Learning desired equivariances / invariances for a given prediction task
from data (e.g., from massive amounts of unlabeled data, from auxiliary
labels, from multiple views, or from side-information when available)
* Learning transferable invariances: feature maps that are
equivariant/invariant with respect to domain- or task-specific nuisance
factors while remaining suitable for transfer learning (on a target domain
or task)
* Learning equivariant / invariant feature maps from structure when it is
available (e.g., temporal ordering structure in videos)
* Incorporating known equivariances / invariances (from domain knowledge,
e.g., rotation and scale for images, permutations for sets, etc.) as
inductive biases
* Architectural priors for equivariant features
* Synergies with unsupervised / self-supervised feature learning methods
that define and solve auxiliary tasks on unlabeled data (such as
auto-encoding, context prediction, etc.) to learn representation maps
* Hierarchical representations with interplay between invariance and
equivariance
* Visualizing/understanding the equivariances and invariances learned by
current popular deep neural net architectures
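As one concrete instance of the inductive-bias topic above — permutation invariance for sets — a symmetric (sum) pooling over per-element encodings yields a set function that is invariant by construction, in the style of Deep Sets. The sketch below is illustrative only; the weights and sizes are arbitrary hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 8))   # hypothetical per-element encoder weights
W2 = rng.normal(size=(8, 2))   # hypothetical readout weights

def set_net(X):
    """Permutation-invariant set function: rho(sum_i phi(x_i))."""
    h = np.maximum(X @ W1, 0.0)    # phi: encode each element independently
    pooled = h.sum(axis=0)         # sum pooling is symmetric in the elements
    return pooled @ W2             # rho: readout on the pooled code

X = rng.normal(size=(6, 4))        # a set of 6 elements, 4 features each
perm = rng.permutation(6)

# Reordering the set leaves the output unchanged, by construction.
assert np.allclose(set_net(X), set_net(X[perm]))
```

Here the invariance is guaranteed architecturally rather than learned from data, which is exactly the trade-off several of the topics above invite submissions on.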

Submission instructions.
Submissions should be at most 4 pages (excluding references) and should
not have been published or presented at other venues; however, work
currently under review elsewhere is welcome. Accepted papers may be posted
on the workshop website, but the workshop will not have archival
proceedings. All accepted papers will be presented in poster sessions, and
selected top submissions will also be given spotlight presentation slots.

Organizers:
Rogerio S. Feris (IBM Research AI)
William T. Freeman (MIT)
Abhishek Kumar (IBM Research AI)
Youssef Mroueh (IBM Research AI)
Antonio Torralba (MIT)
Gregory W. Wornell (MIT)
