08:30-10:10, October 20 (Wednesday), 2021
Time | Invited Talk | Title | Invited Speakers |
---|---|---|---|
08:30-10:10 | 1 | THz MIMO Communications | Prof. Namyoon Lee, POSTECH, Korea |
 | 2 | Resilient Cross-layer mmWave Network Design through Coordination | Prof. Parth Pathak, George Mason University, USA |
 | 3 | Next generation multiple access: Reboot of S-ALOHA with online control | Prof. Hu Jin, Hanyang University, Korea |
Invited Talk 1: “THz MIMO Communications Perspective”
Prof. Namyoon Lee, POSTECH, Korea
Abstract:
A relentless trend in wireless communications is the hunger for bandwidth, and fresh bandwidth is only to be found at ever higher frequencies. While 5G systems are seizing the mmWave band, the attention of researchers is shifting already to the terahertz range. In that distant land of tiny wavelengths, antenna arrays can serve for more than power-enhancing beamforming. Defying lower-frequency wisdom, spatial multiplexing becomes feasible even in line-of-sight conditions. In this talk, I will review the underpinnings of this phenomenon, and present recent results on the ensuing information-theoretic capacity. Reconfigurable array architectures are put forth that can closely approach such capacity, practical challenges are discussed, and supporting experimental evidence is presented.
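For a rough sense of why line-of-sight spatial multiplexing becomes practical at terahertz frequencies, recall the classical spacing rule for two parallel N-element uniform linear arrays at link distance R: element spacing d ≈ sqrt(λR/N) makes the LoS channel matrix orthogonal. The sketch below is illustrative only (the 100 m link and 4-element arrays are assumed parameters, not figures from the talk); it shows the required per-element spacing shrinking by an order of magnitude as the carrier moves from 3 GHz to 300 GHz:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def los_mimo_spacing(freq_hz: float, link_m: float, n: int) -> float:
    """Antenna spacing (m) that orthogonalizes an N-element LoS MIMO
    channel between parallel ULAs: d = sqrt(lambda * R / N)."""
    lam = C / freq_hz  # carrier wavelength in meters
    return math.sqrt(lam * link_m / n)

# Hypothetical link: 100 m range, 4-element arrays at each end.
d_3ghz = los_mimo_spacing(3e9, 100.0, 4)      # ~1.6 m per element: impractical aperture
d_300ghz = los_mimo_spacing(300e9, 100.0, 4)  # ~0.16 m per element: fits a small panel
print(f"3 GHz:   {d_3ghz:.2f} m per element")
print(f"300 GHz: {d_300ghz:.2f} m per element")
```

The same aperture budget that is hopeless at microwave frequencies thus supports a well-conditioned LoS MIMO channel in the terahertz range.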
Biography:
- NAMYOON LEE [S’11, M’14, SM’20] (nylee@postech.ac.kr) received a Ph.D. degree from the University of Texas at Austin in 2014. He was with the Communications and Network Research Group, Samsung Advanced Institute of Technology, Korea, in 2008–2011, and with Wireless Communications Research, Intel Labs, Santa Clara, California, in 2015–2016. He is currently an associate professor at POSTECH. He was a recipient of the 2016 IEEE ComSoc Asia–Pacific Outstanding Young Researcher Award, the 2020 IEEE Best YP Award (Outstanding Nominee), and the 2021 IEIE-IEEE Joint Award for Young Scientist and Engineer. He is currently an associate editor for IEEE Transactions on Wireless Communications, IEEE Transactions on Communications, and IEEE Transactions on Vehicular Technology.
Invited Talk 2: “Resilient Cross-layer mmWave Network Design through Coordination”
Prof. Parth Pathak, George Mason University, USA
Abstract:
Millimeter-wave (mmWave) wireless is poised to revolutionize the next generation of wireless networking and sensing systems with its large available bandwidth and multi-gigabit link rates. Even as mmWave wireless is being commercialized, coordinated and cross-layer solutions are needed to address the fundamental challenges of blockage and mobility in densely deployed next-generation networks. In this talk, I will present our recent progress on addressing these issues. First, I will introduce mmChoir, a multi-point transmission framework that utilizes joint transmissions from multiple APs to provide proactive blockage resilience to clients. I will then discuss a coordinated beamforming architecture that aims to reduce beamforming overhead through network-level coordination in densely deployed mmWave networks. Lastly, I will present our cross-layer mmWave immersive content streaming solution that exploits content similarity, along with multi-beam adaptation and coordination, to realize reliable video-quality delivery over mmWave. We will conclude with a discussion of our ongoing work on low-power commodity mmWave wireless backscattering systems and their applications.
Biography:
- Parth Pathak is an assistant professor in the Computer Science Department at George Mason University. His research interests include the design and development of wireless and mobile computing systems, including next-gen 5G-and-beyond networks, IoT systems, wireless sensing, and ubiquitous computing. He received his Ph.D. degree in computer science from North Carolina State University in 2012. He was a post-doctoral scholar at the University of California, Davis until 2016, before joining George Mason University. He has published 30+ papers in top-tier networking conferences and journals, including two Best Paper Award winners at IFIP Networking 2014 and IEEE DSAA 2019.
Invited Talk 3: “Next generation multiple access: Reboot of S-ALOHA with online control”
Prof. Hu Jin, Hanyang University, Korea
Abstract:
For slotted random access systems, the slotted ALOHA protocol provides a maximum throughput of 0.368 (packets/slot), while in the category of splitting (or tree) algorithms, the maximum achievable throughput can reach up to 0.487 with the first-come first-served (FCFS) algorithm. However, these maximum throughputs are hard to achieve in practical systems, especially when the network population changes over time. In this talk, we discuss the role of real-time/online control of random access in achieving these maximum throughputs. In addition, since 5G mobile communication systems and beyond still adopt a random access procedure for establishing initial connections to the base station, which is particularly important for machine-type communications and Internet of Things (IoT) applications, we further discuss the application of the proposed online control algorithms in cellular systems.
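The 0.368 figure quoted above is the classical 1/e bound: under the standard Poisson traffic model, slotted ALOHA delivers S = G·e^(−G) successful packets per slot at aggregate offered load G, which peaks at G = 1. A minimal numerical check (illustrative only, not part of the talk):

```python
import math

def slotted_aloha_throughput(G: float) -> float:
    """Expected successful packets per slot at aggregate attempt
    rate G packets/slot (Poisson model): S = G * e^{-G}."""
    return G * math.exp(-G)

# Sweep the offered load on a fine grid to locate the maximum numerically.
loads = [i / 1000 for i in range(1, 3001)]
best_G = max(loads, key=slotted_aloha_throughput)
best_S = slotted_aloha_throughput(best_G)

print(f"optimal offered load G = {best_G:.3f}")
print(f"maximum throughput  S = {best_S:.3f}")  # 1/e = 0.368
```

The peak at exactly one attempt per slot is also why the online-control schemes discussed in the talk matter: a practical system must continuously estimate and steer the offered load toward G = 1 as the population changes.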
Biography:
- Hu Jin received the B.E. degree from the University of Science and Technology of China in 2004, and the M.S. and Ph.D. degrees from the Korea Advanced Institute of Science and Technology, South Korea, in 2006 and 2011, respectively. From 2011 to 2013, he was a Postdoctoral Fellow with The University of British Columbia, Canada. Since 2014, he has been with the Division of Electrical Engineering, Hanyang University, Ansan, South Korea, where he is currently an Associate Professor. His research interests include medium-access control and radio resource management for random access networks and scheduling systems, considering advanced signal processing and queueing performance. Recently, his research has focused on the real-time/online control of random access to maximize throughput and minimize delay.
14:30-16:10, October 20 (Wednesday), 2021
Time | Invited Talk | Title | Invited Speakers |
---|---|---|---|
14:30-16:10 | 4 | Serverless Computing and Beyond for Computing-enabled 6G | Prof. Kyungyong Lee, Kookmin University, Korea |
 | 5 | Towards a Secure Cloud Radio LoRaWANs | Prof. Wen Hu, UNSW Sydney, Australia |
 | 6 | Low-latency and High-precision Packet Networking Technologies | Dr. Taesik Cheung, ETRI, Korea |
Invited Talk 4: “Serverless Computing and Beyond for Computing-enabled 6G”
Prof. Kyungyong Lee, Kookmin University, Korea
Abstract:
The advancement of cloud computing has changed the way we develop software applications and maintain computing resources. The initial cloud computing services focused on helping developers build highly available systems, from the perspective of fault tolerance and scalability, by relying on virtualization; this model is termed IaaS. Since the launch of the first-generation cloud services, cloud computing has been evolving in the direction of hiding complex operations and management overhead. In this context, FaaS and various fully-managed cloud services open up opportunities for serverless computing, which frees developers from complex cloud resource management overhead. In this talk, the presenter discusses opportunities and challenges in developing cloud applications using the serverless computing architecture. The presenter also covers directions and opportunities for further advancement of serverless computing in the edge-computing environment, where limited computing resources are connected over very fast mobile networks.
Biography:
- Kyungyong Lee is an Associate Professor in the Department of Computer Science at Kookmin University. His current research topics cover cloud computing, big data platforms, and large-scale distributed computing environments. He received his Ph.D. degree from the Department of Electrical and Computer Engineering at the University of Florida. Before joining Kookmin University, he worked as a software development engineer at Amazon Web Services and HP Labs.
Invited Talk 5: “Towards a Secure Cloud Radio LoRaWANs”
Prof. Wen Hu, UNSW Sydney, Australia
Abstract:
LoRaWAN is an emerging low-power wide-area network technology that provides connectivity for Internet of Things (IoT) devices. As the number of devices increases, the network suffers from scalability issues. Therefore, we design a cloud radio access network (C-RAN or Cloud-RAN) with multiple LoRaWAN gateways to address this problem. Specifically, we propose a compressive sensing-based algorithm to reduce the uplink bit rate between the gateways and the cloud server. Our evaluation shows that with four gateways, up to 87.5% of PHY radio samples can be compressed, and a 1.7x battery-life extension for end devices can be achieved. To provide location information to LoRaWAN end devices, we propose a novel algorithm to improve the resolution of the radio signals. The proposed algorithm synchronizes multiple non-overlapping communication channels by exploiting the unique features of the LoRaWAN radio to increase the overall bandwidth, and it uses both the original and the conjugate of the physical layer to increase the number of multipath components it can resolve. Our evaluation shows that it can achieve median errors of 4.4 m outdoors and 2.4 m indoors. Finally, we introduce a novel algorithm to secure end devices by exploiting LoRa radio channel models.
Biography:
- Wen Hu is an associate professor at the School of Computer Science and Engineering, the University of New South Wales (UNSW). Much of his research career has focused on novel applications, low-power communications, security, and compressive sensing in sensor network systems and the Internet of Things (IoT). Hu publishes regularly in top-rated sensor network and mobile computing venues such as ACM/IEEE IPSN, ACM SenSys, ACM MobiCom, ACM UbiComp, IEEE INFOCOM, ACM Transactions on Sensor Networks (TOSN), IEEE Transactions on Mobile Computing (TMC), and Proceedings of the IEEE.
Hu is a senior member of the ACM and IEEE, an associate editor of ACM TOSN, and was the general chair of CPS-IoT Week 2020; he also serves on the organizing and program committees of networking conferences including ACM/IEEE IPSN, ACM SenSys, ACM MobiCom, and ACM MobiSys.
Hu worked as the Chief Scientist (part time) at WBS Tech to commercialize his research results in smart buildings and IoT. He was previously a principal research scientist and research project leader at the CSIRO Digital Productivity Flagship, and he received his Ph.D. from UNSW.
Invited Talk 6: “Low-latency and High-precision Packet Networking Technologies”
Dr. Taesik Cheung, ETRI, Korea
Abstract:
It is expected that real-time, hyper-immersive interactive services such as AR/VR/XR and hologram communications, and high-precision vertical services such as the remote control of robots, machines, and drones, will become prevalent in the near future. To support time-sensitive services, the network infrastructure needs to guarantee bounded end-to-end latency in the delivery of packets, with minimal, or even zero, packet loss. Providing high-precision control of latency is another important capability the network should have in order to stably support mission-critical and high-precision vertical services. Time-deterministic networking technologies such as TSN, DetNet, and MTN are considered possible solutions to meet these requirements and are being developed in the global standards bodies. This presentation briefly introduces these emerging technologies, compares their characteristics, including pros and cons, and discusses their limitations and the R&D issues that remain to be resolved.
Biography:
- Taesik Cheung received the B.S., M.S., and Ph.D. degrees in electronics engineering from Yonsei University, Seoul, South Korea. Since 2000, he has been with ETRI, where he has been involved in the development of network systems such as Carrier Ethernet switches, flow QoS routers, and packet/optical integrated transport network systems. Since 2005, he has participated in ITU-T and the IETF, contributing to the standardization of protection mechanisms for transport networks. He is the co-editor of ITU-T Rec. G.873.2 and G.808.2, and the co-author of IETF RFC 7271 and RFC 8234. He is currently serving as the director of the Ultra-low Latency Network Research Section of ETRI. His current work focuses on deterministic packet networking technologies such as IEEE TSN and IETF DetNet.
08:30-10:10, October 21 (Thursday), 2021
Time | Invited Talk | Title | Invited Speakers |
---|---|---|---|
08:30-10:10 | 7 | LEO Satellite Internet for High-Speed Aerial Vehicles | Prof. Jihwan Choi, KAIST, Korea |
 | 8 | Autonomous Sensing with Millimeter-Wave Radar | Dr. Kun Qian, University of California San Diego, USA |
 | 9 | High-precision and scalable positioning with UWB | Dr. Haeyoung Jun, Head of Service Standards Lab, Samsung Research, Korea |
Invited Talk 7: “LEO Satellite Internet for High-Speed Aerial Vehicles”
Prof. Jihwan Choi, KAIST, Korea
Abstract:
The 3rd Generation Partnership Project (3GPP) has included non-terrestrial networks (NTN) in the 5G New Radio (NR) standards, and mega-constellations of low-Earth orbit (LEO) satellites are being deployed for global broadband service. This talk will present an overview of state-of-the-art LEO networks, key technologies for the LEO satellite Internet, and their applications in supporting high-speed aerial vehicles, such as urban air mobility (UAM).
Biography:
- Jihwan Choi received the Ph.D. degree in electrical engineering and computer science from the Massachusetts Institute of Technology (MIT), Cambridge, MA, USA. He is currently an Associate Professor at the Dept. of Aerospace Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea. He was with Marvell Semiconductor Inc., Santa Clara, CA, USA and with the Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, Korea. His research interests are in aerospace and wireless communications, and the applications of machine learning and deep learning. Dr. Choi is an Associate Editor for the IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS and an Editorial Board Member for Remote Sensing.
Invited Talk 8: “Autonomous Sensing with Millimeter-Wave Radar”
Dr. Kun Qian, University of California San Diego, USA
Abstract:
Emerging autonomous driving systems require reliable perception of their 3D surroundings. Unfortunately, the current mainstream perception modalities, i.e., camera and Lidar, are vulnerable under challenging weather conditions, where opaque particles distort light and significantly reduce visibility. On the other hand, despite their all-weather operation, today’s vehicle Radars have only coarse resolution and are limited to location and speed detection. I will present two solutions that better exploit Radar sensing capability in autonomous sensing tasks, especially under adverse weather conditions. The first solution generates higher-quality point clouds by enabling synthetic aperture radar (SAR) imaging on low-cost commodity vehicle Radars. The second solution is a deep fusion detector that takes advantage of complementary Lidar and Radar data for vehicle detection.
Biography:
- Kun Qian is a post-doctoral researcher in the Department of Electrical and Computer Engineering, University of California San Diego. He received his Ph.D. in 2019 in the School of Software, Tsinghua University. He received his B.E. in 2014 in Software Engineering from the School of Software, Tsinghua University. His research interests include mobile computing and wireless sensing. He has published over 20 papers in competitive conferences and journals.
Invited Talk 9: “High-precision and scalable positioning with UWB”
Dr. Haeyoung Jun, Head of Service Standards Lab, Samsung Research, Korea
Abstract:
The presentation focuses on UWB technology and introduces various aspects of it, including algorithms and protocols, global standards, industry trends, and implementations.
First, it introduces various indoor positioning technologies using radio-communication signals and compares their pros and cons; in particular, it explains why UWB has been attracting huge interest from various global industry stakeholders. Second, it explains the details of UWB ranging and of indoor positioning solutions using UWB. The UWB-based indoor positioning technologies include multiple approaches using Two-way Ranging, Uplink TDoA (Time Difference of Arrival), and Downlink TDoA. Finally, the presentation shares the current status of industry standardization efforts for UWB interoperability among different manufacturers, as well as various service applications using UWB, including indoor positioning applications.
Biography:
- Dr. Haeyoung Jun is a Principal Engineer & Director at Samsung Research. He is currently leading Service Standards Lab, which is responsible for advanced technology development and standardization of connectivity technologies such as Wi-Fi, Bluetooth and UWB.
He represented Samsung as a member of the Board of Directors in multiple standards organizations, such as the WiGig Alliance, Car Connectivity Consortium, Wi-Fi Alliance, and OCF. He also led the foundation of several standards bodies that Samsung established, such as the Alliance for Wireless Power, UHD Alliance, and FiRa Consortium, and served as an initial Board member of those consortia.
He received Ph.D., M.S. and B.S. degrees from Seoul National University where he worked in the areas of GPS signal processing algorithms, software-based receiver technologies, and indoor positioning systems.
10:30-12:10, October 21 (Thursday), 2021
Time | Invited Talk | Title | Invited Speakers |
---|---|---|---|
10:30-12:10 | 10 | Addressing wireless blindspots using Transparent Antennas and Surfaces: Opportunities and Challenges | Prof. Wonbin Hong, POSTECH, Korea |
 | 11 | Enable 4K quality 3D Video Streaming | Prof. Jihoon Ryoo, SUNY Korea (Stony Brook University), Korea |
 | 12 | Memory Disaggregation and its Performance Enhancement Using SmartNICs | Prof. Youngbin Im, UNIST, Korea |
Invited Talk 10: “Addressing wireless blindspots using Transparent Antennas and Surfaces: Opportunities and Challenges”
Prof. Wonbin Hong, POSTECH, Korea
Abstract:
Frequency spectrum use is becoming increasingly diversified, resorting to ever higher bands amid the explosive growth of wireless applications and services. Propagation path loss and material penetration loss tend to worsen as a function of the operating frequency. Naturally, despite their benefits and potential, future wireless networks are becoming more prone to wireless blindspots, which is critical for OPEX and CAPEX. In this talk, an approach based on transparent antennas and electromagnetic surfaces (e.g., RIS) will be discussed and exemplified, followed by key challenges and potential strategies.
Biography:
- Prof. Wonbin Hong has been a Mueunjae Chaired Professor at POSTECH (Pohang University of Science and Technology) since 2016. He was previously a Principal Engineer at Samsung Electronics from 2009 to 2016. Prof. Hong developed the world’s first mmWave 5G mobile antenna and Antenna-on-Display (AoD) technology, for which he twice received official commendations from the Ministry of Science and ICT, Korea. He holds Ph.D. and master’s degrees from the University of Michigan and a B.S. from Purdue University.
Invited Talk 11: “Enable 4K quality 3D Video Streaming”
Prof. Jihoon Ryoo, SUNY Korea (Stony Brook University), Korea
Abstract:
Along with recent advances in display technology, users demand a higher quality of streaming service, which escalates bandwidth requirements. Considering the recent advent of high-FPS (frames per second) 4K and 8K resolution 360° videos, this bandwidth concern intensifies further in 360° Virtual Reality (VR) content streaming, at an even larger scale. However, the bandwidth currently available in most developed countries can hardly support streaming content at this scale. To address the mismatch between the demand for higher-quality streaming and saturated network improvement, we propose an encoding algorithm that practically resolves the mismatch by exploiting the characteristics of the human vision system (HVS). By pre-rendering a set of regions, where viewers are expected to fixate, on 360° VR content in higher quality than the other regions, the new encoding algorithm improves viewers’ quality of perception (QoP) while reducing content size through geometry-based 360° content encoding. In our user experiment, we compare the performance of the new algorithm to existing 360° content-encoding techniques based on viewers’ head movement and eye gaze traces. To evaluate viewers’ QoP, we propose FoL (field of look), which captures a viewer’s quality perception area in the visual focal field (8°) rather than the wide (around 90°) field of view (FoV). Results of our experimental 360° VR video streaming show that the new algorithm achieves noticeable PSNR improvements in both FoL and FoV.
Biography:
- Jihoon Ryoo is an assistant professor in the Computer Science Department of SUNY Korea (State University of New York), where he has been on the faculty since 2017. An applied computer scientist, Dr. Ryoo’s research interests concern how advanced mobile and embedded computing technologies enhance our daily lives. He and his collaborators are active inventors of new sensing and perception technologies in the fields of wireless networks and computer vision. He held research internships at Microsoft Research, Microsoft Research Asia, Bell Labs, and Motorola Solutions during his graduate years. He is also the co-founder of the start-up IDCITI, an underground GPS service company.
Invited Talk 12: “Memory Disaggregation and its Performance Enhancement Using SmartNICs”
Prof. Youngbin Im, UNIST, Korea
Abstract:
Recently, disaggregated memory systems have been gaining attention as a way to meet the increasing memory requirements in data centers. A disaggregated memory system enables applications to use the memory of remote servers connected through the network. In addition, SmartNICs are becoming popular in data centers as a way to offload work from host CPUs and use the saved CPU cycles for user applications.
In this talk, I will introduce several recent works on memory disaggregation and SmartNICs, and propose utilizing SmartNICs to improve the performance of disaggregated memory systems.
Biography:
- 2019.09 ~ present: Assistant Professor, Department of Computer Science and Engineering, UNIST
2015.03 ~ 2019.07: Postdoctoral Researcher, Computer Science, University of Colorado Boulder
08:30-10:10, October 22 (Friday), 2021
Time | Invited Talk | Title | Invited Speakers |
---|---|---|---|
08:30-10:10 | 13 | AI for Cybersecurity: Current Status and Future Directions | Prof. Peng Liu, Pennsylvania State University, USA |
 | 14 | Cyber/Physical Well-being Technology That Supports Human’s Bounded Rationality | Prof. Tadashi Okoshi, Keio University, Japan |
 | 15 | ICT adopted in healthcare | Prof. Dukyong Yoon, Yonsei University College of Medicine, Korea |
Invited Talk 13: “AI for Cybersecurity: Current Status and Future Directions”
Prof. Peng Liu, Pennsylvania State University, USA
Abstract:
In this talk, I will provide an overview of AI for Cybersecurity, an emerging area of research in the field of cybersecurity. The overview will consist of the following parts: first, I will introduce the concept of AI for Cybersecurity. Second, I will review the current status of this emerging sub-field. Finally, I will point out several future directions.
Biography:
- Peng Liu received his BS and MS degrees from the University of Science and Technology of China, and his PhD from George Mason University in 1999. Dr. Liu is the Raymond G. Tronzo, MD Professor of Cybersecurity, founding Director of the Center for Cyber-Security, Information Privacy, and Trust, and founding Director of the Cyber Security Lab at Penn State University. His research interests are in all areas of computer security. He has published over 350 technical papers, including numerous papers in top conferences and journals. His research has been sponsored by NSF, ARO, AFOSR, DARPA, DHS, DOE, AFRL, NSA, TTC, CISCO, and HP.
Invited Talk 14: “Cyber/Physical Well-being Technology That Supports Human’s Bounded Rationality”
Prof. Tadashi Okoshi, Faculty of Environment and Information Studies, Keio University, Japan
Abstract:
As our daily lives have been evolving drastically not only in the real (physical) space but also in cyberspace, the concept of “well-being” has also been extending into several different areas of our physical, mental, and social activities in both spaces.
Recent information systems for supporting users’ physical and mental wellness/well-being have been developed on the traditional architecture of cyber-physical systems, which involves sensing and recognition of human behavior and states, big data analysis mainly on the cloud side, and information feedback and/or actuation back to the human user. In reality, however, our behavior will not be easily changed by information presentation itself, typically through push-type notifications. We have witnessed many examples of this in people’s behavior during the COVID-19 pandemic.
We humans are not always rational, but rather emotional. In behavioral economics, the concept of “bounded rationality” has been studied to better handle this aspect of human nature. In this talk, I will introduce the bounded rationality concept along with our latest research on information systems that support a human user’s bounded rationality in several different aspects.
Biography:
- Tadashi Okoshi is Associate Professor in Faculty of Environment and Information Studies, Keio University. He is a computer scientist especially focusing on information and computing systems for supporting our life-long cyber-physical well-being. His broader research areas include mobile and ubiquitous computing systems, application and services, human computer interaction, behavior change and persuasive computing. His recent research works are on human attention management, mobile affective computing, and computing for well-being (WellComp).
He has served as an organizing and program committee member of mobile and ubiquitous systems and networking conferences and workshops. He sits on the editorial board of the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT). He has been serving as social media director of ACM SIGMOBILE since 2016. In 2019, he was awarded the IPSJ Microsoft Faculty Award, an annual award for young researchers who have made outstanding international contributions to research and development in major areas of informatics.
He holds a B.A. in Environmental Information (1998) and a Master of Media and Governance (2000) from Keio University, an M.S. in Computer Science (2006) from Carnegie Mellon University, and a Ph.D. in Media and Governance (2015) from Keio University. He also has over seven years of experience in entrepreneurship, software architecture, product management, and project management in the IT industry (Web 2.0, blogging, social networking, and social media).
Invited Talk 15: “ICT adopted in healthcare”
Prof. Dukyong Yoon, Yonsei University College of Medicine, Korea
Abstract:
Machine learning-based artificial intelligence (AI) models that find patterns in data, such as deep learning, are the most widely studied and used in all fields. There is a variety of data to which AI can be applied in hospitals. In addition to structured data such as diagnoses, drug and treatment prescriptions, and laboratory test results, there is a large amount of unique natural language data such as admission and discharge records or pathology reports. In addition, various types of image data, and biosignal data such as the electrocardiogram, are also important data in hospitals. Since these data do not exist independently but interact with each other, there is much hidden information that can be revealed through artificial intelligence. Although the basic principles of AI are similar, its application area can be diversified depending on the kind of data it is applied to. In this presentation, I will focus on cases where AI has been applied to data existing in hospitals, and examine the characteristics and possibilities of AI applications in the medical field.
Biography:
- My research covers data science in the field of medicine—processing and analysis of structured and unstructured medical data. Because medical data comprise diverse data types (numeric, natural language, signal, and image), a multidisciplinary approach is needed. I am interested in discovering novel valuable information (novel features in biosignals or prescription patterns) in medical data. For that purpose, I use both traditional statistical methods and up-to-date artificial intelligence methods. Using such information, our laboratory develops computational models to detect or predict clinical outcomes (clinical conditions or new drug effects) and evaluates the implications of these models in clinical practice. The basic features extracted from the medical data, the computational models, and experience in applying the models in clinical practice will promote digital healthcare, software as a medical device, and digital therapeutics.
10:30-12:10, October 22 (Friday), 2021
Time | Invited Talk | Title | Invited Speakers |
---|---|---|---|
10:30-12:10 | 16 | Learning for Learning: Predictive Online Control of Federated Learning with Edge Provisioning | Dr. Lei Jiao, University of Oregon, USA |
 | 17 | Artificial tactile sensing system mimicking human tactile cognition | Prof. Ji-Woong Choi, DGIST, Korea |
 | 18 | The participatory AR platform and applications | Dr. Sung-Uk Jung, Principal Researcher, ETRI, Korea |
Invited Talk 16: “Learning for Learning: Predictive Online Control of Federated Learning with Edge Provisioning”
Dr. Lei Jiao, University of Oregon, USA
Abstract:
Operating federated learning optimally over distributed cloud-edge networks is a non-trivial task, which requires managing data transfer from user devices to edges, resource provisioning at edges, and federated learning between the edges and the cloud. We formulate a non-linear mixed-integer program, minimizing the long-term cumulative cost of such a federated learning system while guaranteeing the desired convergence of the machine learning models being trained. We then design a set of novel polynomial-time online algorithms that make adaptive decisions by solving for continuous solutions and converting them to integers to control the system on the fly, based only on predicted inputs about the dynamic and uncertain cloud-edge environments obtained via online learning. We rigorously prove the competitive ratio, capturing the multiplicative gap between our approach using predicted inputs and the offline optimum using actual inputs. Extensive evaluations with real-world training datasets and system parameters confirm the empirical superiority of our approach over multiple state-of-the-art algorithms.
Biography:
- Lei Jiao received the Ph.D. degree in computer science from the University of Göttingen, Germany. He is currently an assistant professor at the Department of Computer and Information Science, University of Oregon, USA. Previously he worked as a member of technical staff at Alcatel-Lucent/Nokia Bell Labs in Dublin, Ireland and as a researcher at IBM Research in Beijing, China. He is interested in the mathematics of optimization, control, learning, and economics, applied to computer and telecommunication systems, networks, and services. He publishes papers in journals such as IEEE/ACM ToN, IEEE TPDS, and IEEE JSAC, and in conferences such as INFOCOM, MOBIHOC, ICNP, and ICDCS. He is a recipient of the NSF CAREER Award. He also received the Best Paper Awards of IEEE LANMAN 2013 and IEEE CNS 2019, and the 2016 Alcatel-Lucent Bell Labs UK and Ireland Recognition Award. He served as a guest editor for IEEE JSAC and was on the program committees of many conferences including INFOCOM, MOBIHOC, ICDCS, and IWQoS.
Invited Talk 17: “Artificial tactile sensing system mimicking human tactile cognition”
Prof. Ji-Woong Choi, DGIST, Korea
Abstract:
With the upcoming era of the metaverse, enabling computers to understand human sensibilities has become one of the most promising research topics. Digital experiences, which offer vicarious sensory experiences without actual contact, can be widely applied in fields such as entertainment and online marketing. For a more immersive digital experience, the tactile sense is an indispensable component alongside visual and auditory sensations. In this talk, I will introduce an artificial tactile perception and cognition system, named “Tactile Avatar”, that reproduces tactile sensations ranging from smooth/soft to rough. A piezoelectric tactile sensor is developed to dynamically record various physical information such as pressure, temperature, hardness, sliding velocity, and surface topography. For artificial tactile cognition, human tactile responses to various materials ranging from smooth/soft to rough are assessed and found to vary among participants. Because tactile responses differ from person to person, a deep learning structure is designed to allow personalization through training on individualized histograms of human tactile cognition together with the recorded physical tactile information. This approach can be applied to electronic devices with tactile emotional exchange capabilities as well as to advanced digital experiences.
Biography:
- Ji-Woong Choi received the B.S., M.S., and Ph.D. degrees from Seoul National University (SNU), Seoul, South Korea, in 1998, 2000, and 2004, respectively, all in Electrical Engineering. From 2005 to 2007, he was a Postdoctoral Visiting Scholar with the Department of Electrical Engineering, Stanford University, Stanford, CA, USA. From 2007 to 2010, he was with Marvell Semiconductor, Santa Clara, CA, USA, as a Staff Systems Engineer for next-generation wireless communication systems, including WiMAX and LTE. Since 2010, he has been with the Information and Communication Engineering Department, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea, where he is a Full Professor and also serves as Director of the Brain Engineering Convergence Research Center, DGIST. His research interests include communication theory and signal processing, and related applications such as vehicular communications, biomedical signal processing/machine learning applications, brain–machine/computer interface (BMI/BCI), and near-field wireless power transfer. He is an Editor of the Journal of Communications and Networks (JCN) and IEEE Transactions on Molecular, Biological, and Multi-Scale Communications (TMBMC).
Invited Talk 18: “The participatory AR platform and applications”
Dr. Sung-Uk Jung, Principal Researcher, ETRI, Korea
Abstract:
My research strives to develop practical AR core technologies that are suitable for mobile devices and usable in real situations, by developing an AR cloud framework for multiple users, mobile SLAM for estimating a device’s position, and mobile skeleton extraction for real–virtual object interaction. In this talk, I will outline a participatory AR platform, a government-funded research project in Korea. I will also show applications of the AR platform that have been commercially deployed in public spaces, such as AR musicals, miniature AR, and outdoor AR services.
Biography:
- Sung-Uk Jung received the B.Sc. degree in electrical engineering from Korea University in 2003, the M.Sc. degree in electrical engineering and computer science from Korea Advanced Institute of Science and Technology (KAIST), Korea, in 2005, and the Ph.D. degree in electronics and computer science from the University of Southampton, U.K., in 2012. Since August 2005, he has been with the Electronics and Telecommunications Research Institute (ETRI), Korea, and is currently with the Content Research Division of ETRI. His current research interests include computer vision, human motion analysis, augmented reality, and human–computer interaction.