LPCVC 

2020 IEEE Low-Power Computer Vision Challenge

LPCVC 2020 Overview

The 2020 CVPR Workshop on Low-Power Computer Vision was successfully hosted on 15 June 2020 in Seattle, USA. The workshop featured the winners of past Low-Power Image Recognition Challenges, invited speakers from academia and industry, and presentations of accepted papers.

Seattle, Washington, USA

cvpr2020.thecvf.com

The LPCVC featured three tracks, sponsored by Facebook, Xilinx, and Google, and was hosted online from 1 July 2020 to 31 July 2020. More details are posted below. Please join the online discussion forum at lpcvc.slack.com.

The 2020 IEEE Low-Power Computer Vision Challenge (LPCVC) concluded successfully on 31 July 2020, after a month of competition. Forty-six teams submitted 378 solutions across four tracks. Many teams submitted multiple solutions during the month and saw their scores improve substantially. The 2020 LPCVC added a video track and expanded the Low-Power Image Recognition Challenge (LPIRC), which started in 2015.


Tracks

Online Track - PyTorch UAV Video

Optical character recognition in video captured by an unmanned aerial vehicle (UAV).


Online Track - FPGA Image

Accurate and low-power classification of objects from 1000 ImageNet categories.


Online Track - OVIC Image

A competition for real-time detection and classification.


 

For more information, go to the competition introduction page. To see a detailed score breakdown for each submission, go to the leaderboard.

 

PyTorch UAV Video Track

This track requires participating teams to recognize the English letters and numbers on signs in video captured by an unmanned aerial vehicle (UAV). The challenge takes the form of question-answer pairs: a successful solution must return the English letters or numbers that appear in the same frame as the question. For example, if the question is “RESTRICTED AREA”, the answer should be “B056 ELEVATOR CONTROL ELEVATOR PERSONNEL ONLY”; if the question is “CONFERENCE”, the answer should be “122”. Answers are case-insensitive. Solutions must use PyTorch running on a Raspberry Pi 3B+.
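
As an illustration of the question-answer format, here is a minimal, hypothetical sketch of case-insensitive answer matching; the helper names are ours, not part of the official scoring code, which may normalize answers differently:

    # Hypothetical sketch of the case-insensitive matching described above;
    # not the official scorer.
    def normalize(text: str) -> str:
        """Lowercase and collapse whitespace before comparing answers."""
        return " ".join(text.lower().split())

    def is_correct(predicted: str, expected: str) -> bool:
        """An answer counts as correct if it matches after normalization."""
        return normalize(predicted) == normalize(expected)

    # Examples from the track description above.
    assert is_correct("122", "122")
    assert is_correct("b056 elevator control elevator personnel only",
                      "B056 ELEVATOR CONTROL ELEVATOR PERSONNEL ONLY")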



Eleven teams submitted 84 solutions to this track. The winners of this track are:

First: LPNet (ByteDance Inc)
  1. Xuefeng Xiao
  2. Xing Wang
  3. Li Lin
  4. Tianyu Zhao
  5. Ni Zhuang
  6. Huixia Li
  7. Xin Xia
  8. Can Huang
  9. Linfu Wen
Representative: Xuefeng Xiao (xiaoxuefeng.ailab@bytedance.com)
Second: TAMU-KWAI (Texas A&M University and Kwai Inc)
  1. Yunhe Xue*
  2. Zhenyu Hu*
  3. Rahul Sridhar*
  4. Zhenyu Wu*
  5. Pengcheng Pi
  6. Jiayi Shen
  7. Jianchao Tan
  8. Xiangru Lian
  9. Zhangyang Wang
  10. Ji Liu
Representative: Zhenyu Wu (wuzhenyu_sjtu@tamu.edu)
Third: C408 (Stony Brook University, SUNY Korea)
  1. Seonghwan Jeong
  2. Jay Hoon Jung
  3. YoungMin Kwon
Representative: Seonghwan Jeong (seonghwan.jeong@stonybrook.edu)

 

FPGA Image Track

This track uses the Ultra96-V2 board with the Xilinx® Deep Learning Processor Unit (DPU), running PYNQ and Vitis AI. The challenge is to recognize objects in images. Nineteen teams submitted 115 solutions. The winners of this track are:

First: OnceForAll (MIT HAN Lab)
  1. Zhekai Zhang*
  2. Han Cai*
  3. Song Han
  4. Yubei Chen
  5. Jeffrey Pan
  6. Ligeng Zhu
  7. Tianzhe Wang
Representative: Zhekai Zhang (zhangzk@mit.edu)
Second: MAXX (National Chiao Tung University, Taiwan)
  1. Shih-Yu Wei
  2. Xuan-Hong Li
  3. Yu-Da Ju
  4. Juinn-Dar Huang
Representative: Xuan-Hong Li (nonelee.ee05@g2.nctu.edu.tw)
Third: Water (SKLCA, Institute of Computing Technology, CAS)
  1. Shengwen Liang
  2. Rick Lee
  3. Ying Wang
  4. Cheng Liu
  5. Huawei Li
Representative: Shengwen Liang (liangshengwen@ict.ac.cn)
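
For reference, here is a minimal sketch of running a compiled classifier on the track's Ultra96-V2 + DPU platform with the pynq_dpu package; the overlay and model file names below are placeholders rather than challenge artifacts, and the authoritative flow is documented by PYNQ and Vitis AI:

    # Sketch, assuming a Vitis AI compiled model; file names are placeholders.
    import numpy as np
    from pynq_dpu import DpuOverlay

    overlay = DpuOverlay("dpu.bit")      # program the DPU hardware overlay
    overlay.load_model("model.xmodel")   # load the compiled model

    dpu = overlay.runner                 # VART runner for the loaded model
    in_tensor = dpu.get_input_tensors()[0]
    out_tensor = dpu.get_output_tensors()[0]

    # Host buffers shaped to the model's input/output tensors.
    in_buf = [np.zeros(tuple(in_tensor.dims), dtype=np.float32)]
    out_buf = [np.zeros(tuple(out_tensor.dims), dtype=np.float32)]

    in_buf[0][...] = 0.0                 # replace with a preprocessed image
    job = dpu.execute_async(in_buf, out_buf)
    dpu.wait(job)
    top1 = int(np.argmax(out_buf[0]))    # predicted ImageNet class index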

 

OVIC TensorFlow Track

The OVIC TensorFlow track has three challenges:
  • Interactive object detection: This category focuses on COCO detection models operating at 30 ms/image on a Pixel 4 smartphone (CPU).
  • Real-time image classification on Pixel 4: This category focuses on ImageNet classification models operating at 10 ms/image on a Pixel 4 smartphone (CPU).
  • Real-time image classification on LG G8: This category focuses on ImageNet classification models operating at 7 ms/image on an LG G8 smartphone (DSP).
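
To make these budgets concrete, here is a rough host-side sketch of checking mean per-image latency against a target; this is illustrative only, since official scoring measures on the phones themselves with the OVIC benchmarker:

    # Illustrative timing helper; on-device measurement is what counts.
    import time

    def mean_latency_ms(run_inference, images, warmup=10):
        """Average wall-clock milliseconds per image for a callable model."""
        for img in images[:warmup]:          # warm up caches and lazy init
            run_inference(img)
        start = time.perf_counter()
        for img in images:
            run_inference(img)
        return (time.perf_counter() - start) * 1000.0 / len(images)

    BUDGET_MS = 10.0  # e.g., the Pixel 4 classification target above
    # within_budget = mean_latency_ms(model, dataset) <= BUDGET_MS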

Sixteen teams submitted 179 solutions across these three challenges. The winners are:

Interactive Object Detection

First: OnceForAll (MIT HAN Lab)
  1. Yubei Chen*
  2. Jeffrey Pan*
  3. Han Cai
  4. Zhekai Zhang
  5. Tianzhe Wang
  6. Ligeng Zhu
  7. Song Han
Representative: Yubei Chen (yubeic@berkeley.edu)
Second: SjtuBicasl (Shanghai Jiao Tong University)
  1. Lining Hu
  2. Xinzi Xu
  3. Yongfu Li
  4. Weihong Yan
  5. Xiaocui Li
  6. Yuxin Ji
Representative: Xinzi Xu (xu_xinzi@sjtu.edu.cn)
Third: Novauto (Novauto Inc)
  1. Changcheng Tang
  2. Tianyi Lu
  3. Zhiyong Zhang
  4. Shuang Liang
  5. Tianchen Zhao
  6. Xuefei Ning
  7. Shiyao Li
  8. Hanbo Sun
  9. Shulin Zeng
Representative: Changcheng Tang (angcc1127@gmail.com)

 

Real-time Image Classification on Pixel 4

First: BAIDU&THU (Baidu Inc & Tsinghua University)
  1. Zerun Wang
  2. Teng Xi
  3. Dahan Gong
  4. Fei Tian
  5. Zizhou Jia
  6. Cheng Cui
  7. Xiaoyu Huang
  8. Zhihang Li
  9. Fan Yang
  10. Fukui Yang
  11. Tianxiang Hao
  12. Shengzhao Wen
  13. Huiyue Yang
  14. Gang Zhang
  15. Guiguang Ding
Representative: Zerun Wang (wangzrthu@gmail.com)
Second: imchinfei (Beihang University)
  1. Fei Qin
  2. Yangyang Kuang
Representative: Fei Qin (64242200@qq.com)
Third: LPNet (ByteDance Inc)
  1. Xuefeng Xiao
  2. Jiashi Li
  3. Xin Xia
  4. Huixia Li
  5. Linfu Wen
Representative: Xuefeng Xiao (xiaoxuefeng.ailab@bytedance.com)

 

Real-time Image Classification on LG G8

First: LPNet (ByteDance Inc)
  1. Xuefeng Xiao
  2. Jiashi Li
  3. Xin Xia
  4. Huixia Li
  5. Linfu Wen
Representative: Xuefeng Xiao (xiaoxuefeng.ailab@bytedance.com)
Second: imchinfei (Beihang University)
  1. Fei Qin
  2. Yangyang Kuang
Representative: Fei Qin (64242200@qq.com)
Third (tie): OnceForAll (MIT HAN Lab)
  1. Yubei Chen*
  2. Jeffrey Pan*
  3. Han Cai
  4. Zhekai Zhang
  5. Tianzhe Wang
  6. Ligeng Zhu
  7. Song Han
Representative: Yubei Chen (yubeic@berkeley.edu)
Third (tie): FoxPanda (National Chiao Tung University and National Tsing Hua University)
  1. Chia-Hsiang Liu
  2. Wei-Xiang Guo
  3. Yuan-Yao Sung
  4. Yi Lee
  5. Kai-Chiang Wu
Representative: jacoblau.cs08g@nctu.edu.tw

LPCVC started in 2015 with the aim of identifying technologies for energy-efficient computer vision. IEEE Rebooting Computing has been the primary financial sponsor and provides administrative support. A new IEEE technical committee has been created to coordinate future activities in low-power computer vision. If you are interested in joining the committee, please contact Terence Martinez, Program Director, Future Directions, IEEE Technical Activities, t.c.martinez@ieee.org.

Sponsors

The 2020 LPCVC tracks were sponsored by Facebook, Xilinx, and Google, with IEEE Rebooting Computing as the primary financial sponsor.

Program Recordings

Time | Session
06:00 | Organizers and Sponsors’ Welcome
  1. Terence Martinez, IEEE
  2. Yung-Hsiang Lu, Purdue
  3. Yiran Chen, Duke
  4. Ming-Ching Chang, SUNY Albany
  5. Joe Spisak, Facebook
  6. Bo Chen, Google
  7. Ashish Sirasao, Xilinx
  8. Yao-Wen Chang, IEEE CEDA
  9. Yen-Kuang Chen, IEEE CASS
  10. Elan Microelectronics
06:05 | History of Low-Power Computer Vision Challenge, Terence Martinez, IEEE
06:10 | Paper Presentations:
  1. Challenges in Energy-Efficient Deep Neural Network Training with FPGA, Yudong Tao (University of Miami)*; Ma Rui (University of Miami); Mei-Ling Shyu (University of Miami); Shu-Ching Chen (Florida International University)
  2. CSPNet: A New Backbone that can Enhance Learning Capability of CNN, Chien-Yao Wang (Institute of Information Science, Academia Sinica)*; Hong-Yuan Mark Liao (Institute of Information Science, Academia Sinica, Taiwan); Yueh-Hua Wu (National Taiwan University / Academia Sinica); Ping-Yang Chen (Department of Computer Science, National Chiao Tung University); Jun-Wei Hsieh (College of Artificial Intelligence and Green Energy); I-Hau Yeh (Elan Microelectronics Corporation)
  3. Recursive Hybrid Fusion Pyramid Network for Real-Time Small Object Detection on Embedded Devices, Ping-Yang Chen (Department of Computer Science, National Chiao Tung University); Jun-Wei Hsieh (College of Artificial Intelligence and Green Energy)*; Chien-Yao Wang (Institute of Information Science, Academia Sinica); Hong-Yuan Mark Liao (Institute of Information Science, Academia Sinica, Taiwan)
  4. Enabling Incremental Knowledge Transfer for Object Detection at the Edge, Mohammad Farhadi Bajestani (Arizona State University)*; Mehdi Ghasemi (Arizona State University); Sarma Vrudhula (Arizona State University); Yezhou Yang (Arizona State University)
  5. Effective Deep-Learning-Based Depth Data Analysis on Low-Power Hardware for Supporting Elderly Care, Christopher Pramerdorfer (Computer Vision Lab - TU Wien)*; Martin Kampel (Vienna University of Technology, Computer Vision Lab); Rainer Planinc (CogVis)
07:00 | Poster Presentations:
  1. Enabling monocular depth perception at the very edge, Valentino Peluso (Politecnico di Torino); Antonio Cipolletta (Politecnico di Torino); Andrea Calimera (Politecnico di Torino)*; Matteo Poggi (University of Bologna); Fabio Tosi (University of Bologna); Filippo Aleotti (University of Bologna); Stefano Mattoccia (University of Bologna)
  2. A Distributed Hardware Prototype Targeting Distributed Deep Learning for On-device Inference, Allen J Farcas (University of Texas at Austin); Radu Marculescu (University of Texas at Austin)*; Kartikeya Bhardwaj; Guihong Li (UT)
07:10 | Q&A, Moderator: Ming-Ching Chang, SUNY Albany
07:30 | Invited Speeches:
  1. Yiran Chen, Duke, Finding a Sparse Neural Network Model
  2. Philip Lapsley, Embedded Vision Alliance, Business Opportunities
  3. Carole-Jean Wu, Facebook, MLPerf
08:15 | Q&A, Moderator: Carole-Jean Wu, Facebook
08:30 | Invited Speeches:
  1. Evgeni Gousev, Qualcomm, TinyML
  2. Vikas Chandra, Facebook
  3. Song Han, MIT: Once-for-All: Train One Network and Specialize it for Efficient Deployment on Diverse Hardware Platforms
09:15 | Q&A, Moderator: Jaeyoun Kim, Google
09:30 | Invited Speeches (Tools):
  1. Christian Keller, Facebook
  2. Ashish Sirasao, Xilinx
  3. Bo Chen, Google
10:15 | Q&A, Moderator: Joe Spisak, Facebook
10:30 | Panel: How can you build startups using low-power computer vision? Moderator: Song Han, MIT

Panelists:

  1. Philip Lapsley, Embedded Vision Alliance, Business Opportunities
  2. Evgeni Gousev, Qualcomm, TinyML.
  3. Carole-Jean Wu, Facebook, MLPerf
  4. Kurt Keutzer, Berkeley
  5. Mohammad Rastegari, Apple
11:30 | Adjourn

Organizers

Name | Role and Organization | Email
Hartwig Adam | Google | hadam@google.com
Mohamed M. Sabry Aly | IEEE Council on Electronic Design Automation, Nanyang Technological University | msabry@ntu.edu.sg
Alexander C Berg | University of North Carolina | aberg@cs.unc.edu
Ming-Ching Chang | University at Albany - SUNY | mchang2@albany.edu
Bo Chen | Google | bochen@google.com
Shu-Ching Chen | Florida International University | chens@cs.fiu.edu
Yen-Kuang Chen | IEEE Circuits and Systems Society, Alibaba | y.k.chen@ieee.org
Yiran Chen | ACM Special Interest Group on Design Automation, Duke University | yiran.chen@duke.edu
Chia-Ming Cheng | Mediatek | cm.cheng@mediatek.com
Bohyung Han | Seoul National University | bhhan@snu.ac.kr
Andrew Howard | Google | howarda@google.com
Xiao Hu | lpcv.ai website manager, Purdue student | hu440@purdue.edu
Christian Keller | Facebook | chk@fb.com
Jaeyoun Kim | Google | jaeyounkim@google.com
Mark Liao | Academia Sinica, Taiwan | liao@iis.sinica.edu.tw
Tsung-Han Lin | Mediatek | Tsung-Han.Lin@mediatek.com
Yung-Hsiang Lu (contact) | Chair, Purdue | yunglu@purdue.edu
An Luo | Amazon | anluo@amazon.com
Terence Martinez | IEEE Future Directions | t.c.martinez@ieee.org
Ryan McBee | Southwest Research Institute | ryan.mcbee@swri.org
Andreas Moshovos | University of Toronto / Vector Institute | moshovos@ece.utoronto.ca
Naveen Purushotham | Xilinx Inc. | npurusho@xilinx.com
Tim Rogers | Purdue | timrogers@purdue.edu
Mei-Ling Shyu | IEEE Multimedia Technical Committee, University of Miami | shyu@miami.edu
Ashish Sirasao | Xilinx Inc. | asirasa@xilinx.com
Joseph Spisak | Facebook | jspisak@fb.com
George K. Thiruvathukal | Loyola University Chicago | gkt@cs.luc.edu
Yueh-Hua Wu | Academia Sinica, Taiwan | kriswu@iis.sinica.edu.tw
Junsong Yuan | University at Buffalo | jsyuan@buffalo.edu
Anirudh Vegesana | lpcv.ai website manager, Purdue student | avegesan@purdue.edu
Carole-Jean Wu | Facebook | carolejeanwu@fb.com

FAQ

  1. Q: Where can I sign the agreement form?

    A: You sign the agreement form when submitting your solution to any track. Each account needs to sign it only once. The agreement form is a legally binding document, so please read it carefully.

  2. Q: How can we register as a team?

    A: Each team needs only one registered account. You can indicate your team members when signing the agreement form.

  3. Q: How can I register for a specific track?

    A: Registered accounts are not limited to any particular track. An account is considered to be participating in a track once it submits a solution to that track.