Seattle, Washington, USA

Date: June 15, 2020, 8:30 AM to 5:00 PM

2020 Low-Power Computer Vision Competition Winners Announcement

LPCVC 2020 Overview

The 2020 CVPR Workshop on the Low-Power Computer Vision Challenge was successfully hosted on June 15, 2020 in Seattle, USA. The workshop featured the winners of past Low-Power Image Recognition Challenges, invited speakers from academia and industry, and presentations of accepted papers.

The LPCVC features three tracks, sponsored by Facebook, Xilinx, and Google, which will be hosted online. The challenge will be open for submissions from July 1 to July 31, 2020. More details are posted below. Please join the online discussion forum at lpcvc.slack.com.


How to participate: Log in on this website to participate in the following tracks. A registered account may participate in any track free of charge. Detailed instructions can be found on the corresponding information pages.

Online Track - UAV Video

Optical character recognition of characters in video captured by an unmanned aerial vehicle (UAV).


Online Track - FPGA Image

Accurate and low-power classification of objects from 1000 ImageNet categories.


Online Track - OVIC Image

A competition for real-time detection and classification.


Program Recordings

06:00 Organizers and Sponsors’ Welcome
  1. Terence Martinez, IEEE
  2. Yung-Hsiang Lu, Purdue
  3. Yiran Chen, Duke
  4. Ming-Ching Chang, SUNY Albany
  5. Joe Spisak, Facebook (recorded)
  6. Bo Chen, Google (recorded)
  7. Ashish Sirasao, Xilinx (recorded)
  8. Yao-Wen Chang, IEEE CEDA (recorded)
  9. Yen-Kuang Chen, IEEE CASS (recorded)
  10. Elan Microelectronics (recorded)
  11. Mediatek (recorded)
06:05 History of Low-Power Computer Vision Challenge, Terence Martinez, IEEE
06:10 Paper Presentations:
  1. Challenges in Energy-Efficient Deep Neural Network Training with FPGA, Yudong Tao (University of Miami)*; Ma Rui (University of Miami); Mei-Ling Shyu (University of Miami); Shu-Ching Chen (Florida International University)
  2. CSPNet: A New Backbone that can Enhance Learning Capability of CNN, Chien-Yao Wang (Institute of Information Science, Academia Sinica)*; Hong-Yuan Mark Liao (Institute of Information Science, Academia Sinica, Taiwan); Yueh-Hua Wu (National Taiwan University / Academia Sinica); Ping-Yang Chen (Department of Computer Science, National Chiao Tung University); Jun-Wei Hsieh (College of Artificial Intelligence and Green Energy); I-Hau Yeh (Elan Microelectronics Corporation)
  3. Recursive Hybrid Fusion Pyramid Network for Real-Time Small Object Detection on Embedded Devices, Ping-Yang Chen (Department of Computer Science, National Chiao Tung University); Jun-Wei Hsieh (College of Artificial Intelligence and Green Energy)*; Chien-Yao Wang (Institute of Information Science, Academia Sinica); Hong-Yuan Mark Liao (Institute of Information Science, Academia Sinica, Taiwan)
  4. Enabling Incremental Knowledge Transfer for Object Detection at the Edge, Mohammad Farhadi Bajestani (Arizona State University)*; Mehdi Ghasemi (Arizona State University); Sarma Vrudhula (Arizona State University); Yezhou Yang (Arizona State University)
  5. Effective Deep-Learning-Based Depth Data Analysis on Low-Power Hardware for Supporting Elderly Care, Christopher Pramerdorfer (Computer Vision Lab - TU Wien)*; Martin Kampel (Vienna University of Technology, Computer Vision Lab); Rainer Planinc (CogVis)
07:00 Poster Presentations:
  1. Enabling monocular depth perception at the very edge, Valentino Peluso (Politecnico di Torino); Antonio Cipolletta (Politecnico di Torino); Andrea Calimera (Politecnico di Torino)*; Matteo Poggi (University of Bologna); Fabio Tosi (University of Bologna); Filippo Aleotti (University of Bologna); Stefano Mattoccia (University of Bologna)
  2. A Distributed Hardware Prototype Targeting Distributed Deep Learning for On-device Inference, Allen J. Farcas (University of Texas at Austin); Radu Marculescu (University of Texas at Austin)*; Kartikeya Bhardwaj; Guihong Li (UT)
07:10 QA, Moderator: Ming-Ching Chang, SUNY Albany
07:30 Invited Speeches:
  1. Yiran Chen, Duke, Finding a Sparse Neural Network Model
  2. Philip Lapsley, Embedded Vision Alliance, Business Opportunities
  3. Carole-Jean Wu, Facebook, MLPerf
08:15 QA, Moderator: Carole-Jean Wu, Facebook
08:30 Invited Speeches:
  1. Evgeni Gousev, Qualcomm, TinyML
  2. Vikas Chandra, Facebook
  3. Song Han, MIT: Once-for-All: Train One Network and Specialize it for Efficient Deployment on Diverse Hardware Platforms
09:15 QA, Moderator: Jaeyoun Kim, Google
09:30 Invited Speeches (Tools):
  1. Christian Keller, Facebook
  2. Ashish Sirasao, Xilinx
  3. Bo Chen, Google
10:15 QA, Moderator: Joe Spisak, Facebook
10:30 Panel: How can you build startups using low-power computer vision? Moderator: Song Han, MIT
  1. Philip Lapsley, Embedded Vision Alliance, Business Opportunities
  2. Evgeni Gousev, Qualcomm, TinyML
  3. Carole-Jean Wu, Facebook, MLPerf
  4. Kurt Keutzer, Berkeley
  5. Mohammad Rastegari, Apple


Name | Role and Organization | Email
Hartwig Adam | Google | hadam@google.com
Mohamed M. Sabry Aly | IEEE Council on Design Automation, Nanyang Technological University | msabry@ntu.edu.sg
Alexander C. Berg | University of North Carolina | aberg@cs.unc.edu
Ming-Ching Chang | University at Albany - SUNY | mchang2@albany.edu
Bo Chen | Google | bochen@google.com
Shu-Ching Chen | Florida International University | chens@cs.fiu.edu
Yen-Kuang Chen | IEEE Circuits and Systems Society, Alibaba | y.k.chen@ieee.org
Yiran Chen | ACM Special Interest Group on Design Automation, Duke University | yiran.chen@duke.edu
Chia-Ming Cheng | Mediatek | cm.cheng@mediatek.com
Bohyung Han | Seoul National University | bhhan@snu.ac.kr
Andrew Howard | Google | howarda@google.com
Xiao Hu | manages lpcv.ai website, Purdue student | hu440@purdue.edu
Christian Keller | Facebook | chk@fb.com
Jaeyoun Kim | Google | jaeyounkim@google.com
Mark Liao | Academia Sinica, Taiwan | liao@iis.sinica.edu.tw
Tsung-Han Lin | Mediatek | Tsung-Han.Lin@mediatek.com
(contact) Yung-Hsiang Lu | Chair, Purdue | yunglu@purdue.edu
An Luo | Amazon | anluo@amazon.com
Terence Martinez | IEEE Future Directions | t.c.martinez@ieee.org
Ryan McBee | Southwest Research Institute | ryan.mcbee@swri.org
Andreas Moshovos | University of Toronto / Vector Institute | moshovos@ece.utoronto.ca
Naveen Purushotham | Xilinx Inc. | npurusho@xilinx.com
Tim Rogers | Purdue | timrogers@purdue.edu
Mei-Ling Shyu | IEEE Multimedia Technical Committee, University of Miami | shyu@miami.edu
Ashish Sirasao | Xilinx Inc. | asirasa@xilinx.com
Joseph Spisak | Facebook | jspisak@fb.com
George K. Thiruvathukal | Loyola University Chicago | gkt@cs.luc.edu
Yueh-Hua Wu | Academia Sinica, Taiwan | kriswu@iis.sinica.edu.tw
Junsong Yuan | University at Buffalo | jsyuan@buffalo.edu
Anirudh Vegesana | manages lpcv.ai website, Purdue student | avegesan@purdue.edu
Carole-Jean Wu | Facebook | carolejeanwu@fb.com


  1. Q: Where can I sign the agreement form?

    A: You sign the agreement form when submitting your solution to any track. Each account needs to sign it only once. The agreement form is a legally binding document, so please read it carefully.

  2. Q: How can we register as a team?

    A: One team needs only one registered account. You can list your team members when signing the agreement form.

  3. Q: How can I register for a specific track?

    A: A registered account is not limited to any particular track. An account is considered a participant in a track once it submits a solution to that track.