
Keynote 3: Network Configuration for High-performance Distributed Machine Learning

Conference: 9:00 AM — 10:00 AM CEST (Local: Jun 11 Sat, 3:00 AM — 4:00 AM EDT)


Carla Fabiana Chiasserini, IEEE Fellow, Politecnico di Torino, Italy

This talk does not have an abstract.

Session Chair

Jiangchuan Liu, Simon Fraser University, Canada


Keynote 4: From a need, to an idea, to a complete system: a perspective based on real-world applications

Conference: 10:00 AM — 11:00 AM CEST (Local: Jun 11 Sat, 4:00 AM — 5:00 AM EDT)


Pål Halvorsen, SimulaMet and Oslo Metropolitan University, Norway

This talk does not have an abstract.

Session Chair

Dan Wang, The Hong Kong Polytechnic University, China

Session 5

Privacy

Conference: 11:00 AM — 12:10 PM CEST (Local: Jun 11 Sat, 5:00 AM — 6:10 AM EDT)

Privacy-Preserving and Robust Federated Deep Metric Learning

Yulong Tian and Xiaopeng Ke (Nanjing University, China); Zeyi Tao (William & Mary, USA); Shaohua Ding and Fengyuan Xu (Nanjing University, China); Qun Li (William & Mary, USA); Hao Han (Nanjing University of Aeronautics and Astronautics, China); Sheng Zhong (Nanjing University, China); Xinyi Fu (Ant Group, China)

Federated learning, in contrast to traditional learning paradigms, has demonstrated unique advantages in providing intelligence at the edge. However, existing federated learning approaches focus on end-to-end classification tasks requiring a simple vertically-federated procedure. Many tasks, however, rely on learning a distinguishable feature metric with respect to all the data, which requires a horizontally-federated procedure across training participants. For example, user authentication based on touch-screen data has to ensure that the features representing one person's touch-screen data are dissimilar to those of others. Enabling such federated learning for deep metrics (a.k.a. federated deep metric learning) is challenging because of data privacy and procedure robustness. With these two challenges in mind, this work proposes a novel computing framework for federated deep metric learning. The framework leverages a system-algorithm co-design to address privacy concerns via a TEE (SGX enclave) and Differential Privacy mechanisms, and it includes a large-scale federated protocol that robustly and efficiently handles practical factors such as network fluctuation. We implement and evaluate our computing framework in two settings. One is a large-scale commercial deployment inside one of the world's biggest mobile payment companies, with a large number of volunteer employees' smartphones as federated learning participants; the other is a controllable environment for deep metric learning tasks, in which we conduct experiments in various cases. Our evaluation results show that the framework can efficiently train federated deep metric learning models with large numbers of participants under the protection of data privacy, while providing considerable accuracy in exceptional scenarios.

PPAR: A Privacy-Preserving Adaptive Ranking Algorithm for Multi-Armed-Bandit Crowdsourcing

Shuzhen Chen and Dongxiao Yu (Shandong University, China); Feng Li (Shandong University, China); Zongrui Zou (Shandong University, China); Weifa Liang (City University of Hong Kong, Hong Kong); Xiuzhen Cheng (Shandong University, China)

This paper studies the privacy-preserving adaptive ranking problem for multi-armed-bandit crowdsourcing, where, according to the crowdsourced data, the arms are to be ranked with a tunable granularity by an untrustworthy third-party platform. Any online worker can contribute data through arm pulls but requires its privacy to be preserved, which clearly increases the ranking cost. To improve the quality of the ranking service, we propose a Privacy-Preserving Adaptive Ranking algorithm called PPAR. PPAR solves the problem with high probability while ensuring differential privacy. The total cost of the proposed algorithm is O(K ln K), which is nearly optimal compared with the trivial lower bound Ω(K), where K is the number of arms. By properly setting the granularity parameter, our algorithm can also solve the well-studied full ranking problem and the best-arm identification problem. For the full ranking problem, PPAR attains the same order of cost as the best-known results without privacy preservation. The efficacy of our algorithm is also verified by extensive experiments on public datasets.
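The differential-privacy ingredient of such a ranking scheme can be illustrated with a minimal sketch (an assumption-laden toy, not the PPAR algorithm, which additionally adapts how often each arm is pulled): each arm's empirical mean reward, assumed to lie in [0, 1], is perturbed with Laplace noise of scale 1/(n·ε) before ranking.

```python
import math
import random

def private_ranking(arm_means, n_pulls, epsilon, seed=0):
    """Rank arms by Laplace-perturbed empirical mean rewards.

    Toy differentially private ranking: rewards are assumed to lie in
    [0, 1], so the sensitivity of a mean over n_pulls samples is
    1/n_pulls and the Laplace scale is sensitivity / epsilon.
    """
    rng = random.Random(seed)
    scale = 1.0 / (n_pulls * epsilon)

    def laplace_noise():
        # Inverse-CDF sampling of Laplace(0, scale).
        u = rng.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    noisy = {arm: mean + laplace_noise() for arm, mean in arm_means.items()}
    return sorted(noisy, key=noisy.get, reverse=True)
```

With many pulls the noise is far smaller than the gaps between arms, so the true order survives; with few pulls or a tiny ε, the ranking randomizes, which is exactly the privacy/cost trade-off the abstract quantifies.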

A Privacy-aware Distributed Knowledge Graph Approach to QoIS-driven COVID-19 Misinformation Detection

Lanyu Shang and Ziyi Kou (University of Illinois at Urbana-Champaign, USA); Yang Zhang (University of Notre Dame, USA); Jin Chen (Cleveland Clinic, USA); Dong Wang (University of Illinois at Urbana-Champaign, USA)

In this paper, we focus on the quality of information services (QoIS) for COVID-19-related information on social media. Our goal is to provide reliable COVID-19 information by accurately detecting misleading COVID-19 social media posts using community-contributed COVID-19 fact data (CCFD) from different social media platforms. In particular, CCFD refers to the fact-checking reports submitted to each social media platform by its users and fact-checking professionals. Our work is motivated by the observation that CCFD often contains useful COVID-19 knowledge facts (e.g., "COVID-19 is not a flu") that can effectively facilitate the identification of misleading COVID-19 social media posts. However, CCFD is often private to the individual social media platform that owns it, due to data privacy concerns such as the data copyright of CCFD and the profile information of CCFD contributors. In this paper, we leverage the CCFD from different social media platforms to accurately detect COVID-19 misinformation while effectively protecting the privacy of CCFD. Two critical challenges exist in solving our problem: 1) how to generate privacy-aware COVID-19 knowledge facts from the platform-specific CCFD? 2) How to effectively integrate the privacy-aware COVID-19 knowledge facts from different social media platforms to correctly assess the truthfulness of a social media post? To address these challenges, we develop CoviDKG, a COVID-19 distributed knowledge graph framework that constructs a CCFD-based knowledge graph on each social media platform and exchanges privacy-preserved COVID-19 knowledge facts across platforms to effectively detect misleading COVID-19 posts. We evaluate CoviDKG on two real-world social media datasets from Twitter and Facebook. Evaluation results show that CoviDKG achieves significant performance gains over state-of-the-art baselines in accurately detecting misleading COVID-19 posts on social media.

Nearly Optimal Protocols for Computing Multi-party Private Set Union

Xuhui Gong, Qiang-Sheng Hua and Hai Jin (Huazhong University of Science and Technology, China)

Private Set Operations (PSO) are a hot research topic and one of the most extensively studied problems in data mining. Within PSO, Private Set Union (PSU) is one of the fundamental problems: it allows participants to learn the union of their data sets without leaking any other useful information. However, most existing works have high communication, computation, and round complexities. In this paper, we first propose a novel and efficient protocol to securely compute Multi-party PSU (MPSU) under the semi-honest model. In our system model, there are n participants, where each participant has a set of size k (k may differ among participants) drawn from an integer domain [1, M]. Up to t (0 ≤ t < n) participants may collude with each other. We assume all communication channels among participants are insecure and susceptible to eavesdropping attacks. Our first protocol, EXP-MPSU, uses an OR perturbation encryption scheme; it requires only O(1) rounds and has O(nNλ) communication complexity, which almost matches the communication lower bound Ω(nN/log n) for the MPSU problem, where λ is a security parameter and N (k ≤ N ≤ nk) is the cardinality of the set union. In addition, we note that for the two-party case, i.e., n = 2, our EXP-MPSU protocol has the same complexities as the state-of-the-art work of Davidson and Cid (2017).

For this special case (two parties), we further design a more efficient protocol, OT-PSU, based on oblivious transfer (OT). It requires only O(1) rounds and O(kλ) communication complexity, which almost matches the communication lower bound Ω(k). More importantly, it avoids computationally expensive public-key operations (exponentiations); in other words, the number of exponentiations in this protocol is independent of the size of the data sets. Compared with existing protocols, our two protocols have the lowest communication, computation, and round complexities.
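The set-union-as-OR encoding that underlies an OR-perturbation protocol can be sketched in the clear (no encryption, hence no privacy — purely an illustration of the encoding over the domain [1, M]; `set_to_bitmap` and the other names are hypothetical):

```python
def set_to_bitmap(s, M):
    # Encode a subset of {1, ..., M} as an M-bit integer (bit i-1 <-> element i).
    bits = 0
    for x in s:
        assert 1 <= x <= M
        bits |= 1 << (x - 1)
    return bits

def bitmap_union(bitmaps):
    # The union of the encoded sets is the bitwise OR of the bitmaps; the
    # cryptographic protocol evaluates this OR under encryption so that no
    # party learns who contributed which element.
    out = 0
    for b in bitmaps:
        out |= b
    return out

def bitmap_to_set(bits):
    return {i + 1 for i in range(bits.bit_length()) if bits >> i & 1}

parties = [{1, 4, 7}, {2, 4}, {7, 9}]
union = bitmap_to_set(bitmap_union(set_to_bitmap(p, 10) for p in parties))
# union == {1, 2, 4, 7, 9}
```

The bitmap has M bits per party, which also makes plain why the communication cost of such protocols scales with the domain/union size rather than with k alone.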

Session Chair

Yifei Zhu, Shanghai Jiao Tong University

Session 6

Mobile and Wireless Networks

Conference: 1:00 PM — 2:10 PM CEST (Local: Jun 11 Sat, 7:00 AM — 8:10 AM EDT)

An Experimental Study of Triggered Multi-User Uplink Access with Real Application Traffic

Vinicius Da Silva Goncalves and Edward W. Knightly (Rice University, USA)

The 802.11ax amendment introduced Triggered Uplink Access (TUA) to Wi-Fi to support uplink Multi-User (MU) MIMO. TUA coordinates simultaneous transmission of uplink users via an AP-transmitted trigger that gives an AP-selected group of users permission to transmit simultaneously for an AP-selected duration of time. Thus, TUA promises performance gains by enabling multi-user transmission and reducing contention overhead for access. In this paper, for the first time, we experimentally study the role of real application traffic on the performance of TUA. In particular, while TUA gains for fully backlogged traffic are well established, we show that bursty closed-loop traffic radically transforms performance. Using a real-time emulator, we experimentally evaluate the empirical limits of triggered uplink multi-user access with traffic from a real file transfer application and different uplink triggering strategies. Our results show that TUA significantly reduces file transfer latency compared to legacy single-user uplink, but unfortunately the standardized method for low-overhead backlog reporting leaves substantial benefits unrealized. Moreover, we show that unlike a single-user uplink, TUA has non-monotonic performance with respect to the frame aggregation limit.

Bandwidth Prediction for 5G Cellular Networks

Yuxiang Lin, Wei Dong and Yi Gao (Zhejiang University, China)

Effective bandwidth prediction in fifth-generation (5G) cellular networks is essential for bandwidth-consuming applications, such as virtual reality and holographic video streaming. However, accurate bandwidth prediction in 5G networks remains a challenging task due to the short-distance coverage and frequent handovers of 5G base stations. In this paper, we propose HYPER, a hybrid bandwidth prediction approach using commercial smartphones. HYPER uses an AutoRegressive Moving Average (ARMA) time-series model for intra-cell bandwidth prediction and a Random Forest (RF) regression model for cross-cell bandwidth prediction. The ARMA model takes prior bandwidth usage as its input, while the RF model further uses related network and physical-layer features to predict future bandwidth. We conduct a measurement study in commercial 5G networks to analyze the relationship between these features and bandwidth. Moreover, we propose a handover window adaptation algorithm to automatically adjust the handover window size and determine which model to use during handovers. We use commercial 5G smartphones for data collection and conduct extensive experiments in diverse urban environments. Experimental results based on 1 TB of cellular data show that HYPER reduces the bandwidth prediction error by more than 13% compared to state-of-the-art bandwidth prediction approaches.
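As a rough sketch of the intra-cell predictor, the ARMA model can be approximated by a pure autoregressive AR(2) fit via least squares (a deliberate simplification: the moving-average term, the RF cross-cell model, and all function names below are assumptions, not HYPER's implementation):

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system.
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def fit_ar2(series):
    """Least-squares fit of x[t] = a*x[t-1] + b*x[t-2] + c via the
    normal equations. A stand-in for an ARMA intra-cell bandwidth
    predictor: pure AR(2), no moving-average term."""
    ata = [[0.0] * 3 for _ in range(3)]
    aty = [0.0] * 3
    for t in range(2, len(series)):
        feats = (series[t - 1], series[t - 2], 1.0)
        for i in range(3):
            for j in range(3):
                ata[i][j] += feats[i] * feats[j]
            aty[i] += feats[i] * series[t]
    return solve3(ata, aty)

def predict_next(series, coeffs):
    a, b, c = coeffs
    return a * series[-1] + b * series[-2] + c
```

Feeding the fitted coefficients the last two bandwidth samples yields the next-interval forecast; a cross-cell regressor would instead consume signal-strength and cell-identity features, as the abstract describes.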

Investigating the Predictability of QoS Metrics in Cellular Networks

Stefan Herrnleben, Johannes Grohmann, Veronika Lesch, Thomas Prantl, Florian Metzger, Tobias Hoßfeld and Samuel Kounev (University of Wuerzburg, Germany)

Applications on mobile devices face varying network conditions in cellular networks. The connected radio cell changes often, especially for moving devices. Different access technologies, varying signal strengths, and the distance to the connected radio tower influence the Quality of Service (QoS) of mobile applications. Existing techniques like buffering or adaptive video streaming work reactively, i.e., they react to a decreasing download bitrate. In contrast, these techniques, and mobile applications in general, could benefit from early knowledge of the expected connection quality.

This work investigates the predictability of QoS metrics in cellular networks based on the experience of previous measurements. To this end, we developed an Android app that measures download bitrates with minimal data consumption. We performed over 90,000 measurements using a single network operator and analyzed how precisely QoS indicators such as packet round-trip times and download bitrates can be predicted. We developed a methodology to predict the expected download bitrate along a route and present our approach of aggregating measurements into hexagons of dynamic size. The core contributions of this work are (i) a methodology and implementation for systematic measurement data collection, (ii) an open data publication of our measurement data set, and (iii) an approach for predicting QoS metrics in cellular networks based on aggregated measurements. Our results show that our approach is able to predict the downlink bitrate, the packet round-trip time (ping), and the DNS query duration along a given route.
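The aggregation-then-prediction pipeline can be sketched with fixed-size square cells in place of the dynamically sized hexagons (a simplification for brevity; the function names and the median aggregate are assumptions, not the paper's method):

```python
from statistics import median

def cell_of(lat, lon, size=0.01):
    # Snap a coordinate to a square grid cell; the paper aggregates into
    # hexagons of dynamic size, which squares approximate here.
    return (round(lat / size), round(lon / size))

def aggregate(measurements, size=0.01):
    # measurements: iterable of (lat, lon, downlink_kbps) tuples.
    cells = {}
    for lat, lon, kbps in measurements:
        cells.setdefault(cell_of(lat, lon, size), []).append(kbps)
    # Summarize each cell by the median of its observed bitrates.
    return {cell: median(v) for cell, v in cells.items()}

def predict_along_route(route, cell_stats, size=0.01):
    # Predicted bitrate at each route point: the cell median, or None
    # where no measurements exist yet.
    return [cell_stats.get(cell_of(lat, lon, size)) for lat, lon in route]
```

A navigation app could call `predict_along_route` on the upcoming waypoints and pre-buffer before entering a cell with a historically low median bitrate.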

Timely-throughput Optimal Scheduling for Wireless Flows with Deep Reinforcement Learning

Qi Wang and ChenTao He (Institute of Computing Technology, Chinese Academy of Sciences, China); Katia Jaffrès-Runser (University of Toulouse - Toulouse INP & IRIT Laboratory, France); Jianhui Huang and Yongjun Xu (Institute of Computing Technology, Chinese Academy of Sciences, China)

This paper addresses the problem of scheduling real-time wireless flows under dynamic network conditions and general traffic patterns. The objective is to maximize the fraction of packets of each flow delivered within their deadlines, referred to as timely-throughput. Scheduling under restrictive frame-based traffic models or with greedy maximal scheduling schemes like LDF has been extensively studied, but scheduling algorithms that provide deadline guarantees on packet delivery for general traffic under dynamic network conditions are very limited. We propose two scheduling algorithms that use deep reinforcement learning to optimize timely-throughput for general traffic in dynamic wireless networks: an RL-Centralized and an RL-Decentralized scheduling algorithm. Specifically, we formulate the centralized scheduling problem as a Markov Decision Process (MDP) and propose a multi-environment double deep Q-network (ME-DDQN) structure to adapt to dynamic network conditions. The decentralized scheduling problem is formulated as a Partially Observable Markov Decision Process (POMDP), and an expert-apprentice centralized-training decentralized-execution (EA-CTDE) structure is designed to accelerate training and achieve optimal timely-throughput. Extensive results show that the proposed scheduling algorithms converge fast and adapt well to network dynamics, with superior performance compared to baseline policies. Finally, experimental tests confirm the simulation results and show that the proposed algorithms are feasible in practice on resource-limited platforms.
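The timely-throughput metric itself is easy to reproduce in a toy slot-based simulator; the sketch below scores a greedy earliest-deadline-first policy (a stand-in baseline of our choosing, not the paper's DRL schedulers, and all names are assumptions):

```python
import random

def timely_throughput(packets, horizon, success_prob=1.0, seed=0):
    """Fraction of packets delivered by their deadlines under a greedy
    earliest-deadline-first policy, one transmission attempt per slot.

    packets: list of (arrival_slot, deadline_slot) pairs.
    """
    rng = random.Random(seed)
    by_deadline = sorted(range(len(packets)), key=lambda i: packets[i][1])
    done = set()
    for slot in range(horizon):
        # Pick the not-yet-delivered feasible packet with the earliest deadline.
        choice = next((i for i in by_deadline
                       if i not in done and packets[i][0] <= slot <= packets[i][1]),
                      None)
        # Each transmission attempt succeeds with probability success_prob,
        # modeling an unreliable wireless link.
        if choice is not None and rng.random() < success_prob:
            done.add(choice)
    return len(done) / len(packets)
```

With three packets all due within two slots, only two can be served, so the timely-throughput is 2/3 even with a perfect link; an RL scheduler's job is to pick which packets to sacrifice when links are unreliable.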

Session Chair

Yong Cui, Tsinghua University, China

Session 7

Edge Computing

Conference: 2:10 PM — 3:20 PM CEST (Local: Jun 11 Sat, 8:10 AM — 9:20 AM EDT)

When Multi-access Edge Computing Meets Multi-area Intelligent Reflecting Surface: A Multi-agent Reinforcement Learning Approach

Shen Zhuang and Ying He (Shenzhen University, China); F. Richard Yu (Carleton University, Canada); Chengxi Gao (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China); Weike Pan and Zhong Ming (Shenzhen University, China)

In recent years, multi-access edge computing (MEC) has emerged to provide computation and storage capabilities to Internet of Things (IoT) devices to improve the quality of service (QoS) of IoT applications. In addition, intelligent reflecting surface (IRS) techniques have attracted great interest from both academia and industry as a way to improve communication efficiency. Although existing works leverage IRS techniques in MEC networks, they mainly focus on the single-IRS single-area scenario. In practice, however, multiple IRSs will be deployed across multiple areas in future networks, so schemes designed for the single-IRS single-area scenario will have inferior performance. In this paper, to address this issue, we propose an efficient resource provisioning scheme for multi-IRS multi-area scenarios in MEC networks. We first model the problem as a cooperative multi-agent reinforcement learning process, where each agent manages one area and all agents share the network bandwidth and computation resources. Then, we propose a multi-agent actor-critic method with an attention mechanism for resource management with latency guarantees. Finally, we conduct extensive simulations to verify the effectiveness of the proposed scheme. Our scheme can reduce the required computation resources by up to 11.84% compared with benchmark works. It is also shown that our proposed scheme improves the efficiency of resource allocation and scales well with increasing demand from IoT devices.

JCSP: Joint Caching and Service Placement for Edge Computing Systems

Yicheng Gao and Giuliano Casale (Imperial College London, United Kingdom)

With constrained resources, what, where, and how to cache at the edge is one of the key challenges for edge computing systems. The cached items include not only application data contents but also locally cached edge services that handle incoming requests. However, current systems treat contents and services separately, without considering the latency interplay of caching and queueing. In this paper, we therefore propose a novel class of stochastic models that enable content caching and service placement decisions to be optimized jointly. We first explain how to apply layered queueing network (LQN) models to edge service placement and show that combining them with genetic algorithms provides higher accuracy in resource allocation than an established baseline. Next, we extend LQNs with caching components to establish a joint modeling method for content caching and service placement (JCSP) and present analytical methods to analyze the resulting model. Finally, we simulate real-world Azure traces to evaluate the JCSP method and find that it achieves up to a 35% improvement in response time and a 500 MB reduction in memory usage compared to baseline heuristics for edge caching resource allocation.
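The genetic-algorithm side of the placement search can be sketched as follows, with the LQN latency analysis abstracted into an arbitrary user-supplied cost function (a toy of our own construction; all names are assumptions, not the paper's implementation):

```python
import random

def ga_place(num_services, num_nodes, cost, pop=30, gens=60, seed=1):
    """Genetic algorithm over placements (tuple mapping service -> node).

    `cost` maps a placement tuple to a latency estimate; in a JCSP-style
    pipeline that role would be played by solving an LQN model, here it
    is any callable. Assumes num_services >= 2 and pop >= 4.
    """
    rng = random.Random(seed)
    def rand_ind():
        return tuple(rng.randrange(num_nodes) for _ in range(num_services))
    popn = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=cost)
        elite = popn[: pop // 2]          # elitism: keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, num_services)
            child = list(a[:cut] + b[cut:])   # one-point crossover
            if rng.random() < 0.2:            # occasional mutation
                child[rng.randrange(num_services)] = rng.randrange(num_nodes)
            children.append(tuple(child))
        popn = elite + children
    return min(popn, key=cost)
```

Swapping the toy cost for an LQN solver's predicted response time turns the same loop into the placement search the abstract describes.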

Service Placement and User Assignment in Multi-Access Edge Computing with Base-Station Failure

Haruto Taka, Fujun He and Eiji Oki (Kyoto University, Japan)

Multi-access edge computing (MEC) enables users to exploit cloud-computing resources at a base station (BS) in their proximity, where an MEC server is hosted. While MEC networks have the advantage of low-latency communication and a small network load, the resources at BSes are limited. One challenge is deciding from where to serve users in order to make efficient use of resources. Furthermore, to enhance the reliability of an MEC system, the case in which a BS fails needs to be considered. This paper proposes a service placement and user assignment model with preventive start-time optimization against a single BS failure in MEC networks. The proposed model preventively determines the service placement and user assignment for each BS failure pattern so as to minimize the worst-case penalty, i.e., the largest penalty among all failure patterns. We formulate the proposed model as an integer linear programming problem. We prove that the considered problem is NP-hard and introduce two algorithms, with allocation upgrade and preemption, to solve it. The results show that the introduced algorithms obtain solutions with a smaller worst-case penalty than the benchmark in a practical time.

Dynamic Pricing Scheme for Edge Computing Services: A Two-layer Reinforcement Learning Approach

Feng Lyu and Xinyao Cai (Central South University, China); Fan Wu (Tsinghua University, China); Huali Lu and Sijing Duan (Central South University, China); Ju Ren (Tsinghua University, China)

This talk does not have an abstract.

Session Chair

Nakjung Choi, Nokia Bell Labs

Session 8

Traffic Analysis

Conference: 3:30 PM — 4:40 PM CEST (Local: Jun 11 Sat, 9:30 AM — 10:40 AM EDT)

AdvTraffic: Obfuscating Encrypted Traffic with Adversarial Examples

Hao Liu and Jimmy Dani (University of Cincinnati, USA); Hongkai Yu (Cleveland State University, USA); Wenhai Sun (Purdue University, USA); Boyang Wang (University of Cincinnati, USA)

This talk does not have an abstract.

Flow Sequence-Based Anonymity Network Traffic Identification with Residual Graph Convolutional Networks

Ruijie Zhao and Xianwen Deng (Shanghai Jiao Tong University, China); Yanhao Wang (QI-ANXIN Technology Research Institute, China); Libo Chen, Ming Liu, Zhi Xue and Yijun Wang (Shanghai Jiao Tong University, China)

Identifying anonymity services from network traffic is a crucial task for network management and security. Some recent works based on deep learning have achieved excellent performance in traffic analysis, especially those based on flow sequences (FS), which utilize information and features of the traffic flow. However, these models still face a serious challenge: they lack a mechanism to take relationships between flows into account, and thus mistakenly treat irrelevant flows in an FS as clues for identifying traffic. In this paper, we propose a novel FS-based anonymity network traffic identification framework to tackle this problem, which leverages a Residual Graph Convolutional Network (ResGCN) to exploit relationships between flows for FS feature extraction. Moreover, we design a practical scheme to preprocess raw real-world traffic, which further improves identification performance and efficiency. Experimental results on two real-world traffic datasets demonstrate that our method outperforms state-of-the-art methods by a large margin. Our code and dataset will be made available on GitHub after the double-blind review process.

APS: Adaptive Packet Sizing for Efficient End-to-End Network Transmission

Feixue Han (Tsinghua University, China); Qing Li (Peng Cheng Laboratory, China); Jianer Zhou (SUSTech, China); Hong Xu (The Chinese University of Hong Kong, Hong Kong); Yong Jiang (Graduate School at Shenzhen, Tsinghua University, China)

Practitioners have made great efforts to improve the performance of data transmission. However, they are rarely concerned with the impact of the packet size, limited by the 1500-byte maximum transmission unit (MTU), which brings the considerable overhead of an explosive volume of packets. Hence, some researchers promote jumbo frames to overcome the intrinsic drawbacks of the small MTU. Through comprehensive experiments, we find that jumbo frames cannot always yield the best performance under different transmission conditions. In this paper, we elaborate on the limitations of standard and jumbo frames, and analyze how packet sizes affect network performance. Based on these findings, we present Adaptive Packet Sizing (APS), a dynamic packet size adjustment method that can be easily integrated into existing window-based congestion control (CC) algorithms. APS utilizes a machine learning method to predict the optimal packet size, which minimizes flow completion time (FCT) according to the real-time network status. Besides, a packet-size-based priority mechanism is proposed to further improve the performance of APS. We implement APS in both simulation and testbed environments. APS reduces the FCT by up to 50% and achieves better performance in scenarios with various loss rates.
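Why an optimal packet size exists at all can be seen from a simplified analytic model (our own illustration, not APS's learned predictor): payload efficiency grows with packet size while the per-frame delivery probability shrinks under an assumed i.i.d. bit-error channel, so the modeled goodput s/(s+h) · (1−BER)^{8(s+h)} peaks at a loss-dependent size.

```python
def goodput(size, header=40, ber=0.0):
    # Fraction of link capacity delivered as payload: payload efficiency
    # times the probability that every bit of the frame survives.
    frame_bits = 8 * (size + header)
    return size / (size + header) * (1 - ber) ** frame_bits

def best_size(candidates=(500, 1500, 4500, 9000), header=40, ber=0.0):
    # Pick the candidate payload size with the highest modeled goodput.
    return max(candidates, key=lambda s: goodput(s, header, ber))
```

On a clean link jumbo frames win, but as the bit-error rate rises the model's optimum slides down toward small frames, matching the observation that jumbo frames are not always best.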

iSwift: Fast and Accurate Impact Identification for Large-scale CDNs

Jiyan Sun (Institute of Information Engineering, Chinese Academy of Sciences, China); Tao Lin (Communication University of China, China); Yinlong Liu (Institute of Information Engineering, Chinese Academy of Sciences & School of Cyber Security, University of Chinese Academy of Sciences, China); Xin Wang (Stony Brook University, USA); Bo Jiang (Shanghai Jiao Tong University, China); Liru Geng (Institute of Information Engineering Chinese Academy of Sciences, China); Pengkun Jing and Liang Dai (Institute of Information Engineering, Chinese Academy of Sciences, China)

One key challenge in maintaining a large-scale Content Delivery Network (CDN) is to minimize service downtime when severe system problems happen (e.g., hardware failures). In this case, a critical step is to quickly and accurately identify the range of users with performance degradation, termed impact identification. Successful impact identification not only identifies impacted users but also provides meaningful information for troubleshooting. However, in current practice, impact identification usually takes network engineers several hours of manual work, which may lead to a huge business loss. The main challenges for automatic impact identification in large CDNs include the inaccuracy of the underlying anomaly detection, the huge search space of impact identification, and the severely long-tailed distribution of user traffic. In this paper we propose iSwift, a system specifically designed for impact identification in large-scale CDNs that addresses the aforementioned challenges. We evaluate the performance of iSwift on semi-synthetic datasets; the results show that iSwift achieves an F1-score greater than 0.85 within ten seconds, significantly outperforming state-of-the-art solutions. Furthermore, iSwift has been deployed in a production CDN for around one year as a pilot project, and its online performance has been confirmed by the network operators.

Session Chair

Cristina Alcaraz, University of Malaga, Spain

Session 9

Video Streaming

Conference: 4:40 PM — 5:50 PM CEST (Local: Jun 11 Sat, 10:40 AM — 11:50 AM EDT)

VPPlus: Exploring the Potentials of Video Processing for Live Video Analytics at the Edge

Junpeng Guo, Shengqing Xia and Chunyi Peng (Purdue University, USA)

Edge-assisted video analytics is gaining momentum. In this work, we tackle an important problem: compressing video live-streamed from the device to the edge without sacrificing the accuracy and timeliness of its video analytics. We reveal a larger configuration space for tuning on-device processing, which has been largely overlooked. We further design VPPlus to fulfill this potential, compressing video as much as possible without sacrificing analytical accuracy. VPPlus incorporates offline profiling and online adaptation to generate proper feedback automatically and quickly. We validate the effectiveness and efficiency of VPPlus on four object detection tasks using two popular datasets; VPPlus outperforms state-of-the-art approaches in almost all cases.

Ivory: Learning Network Adaptive Streaming Codes

Salma Emara, Fei Wang, Isidor Kaplan and Baochun Li (University of Toronto, Canada)

With the growing interest in web services during the current COVID-19 outbreak, the demand for high-quality low-latency interactive applications has never been more apparent. Yet, packet losses are inevitable over the Internet, and real-time applications typically run over UDP, which provides no loss recovery. In this paper, we propose Ivory, a new real-world system framework designed to support network-adaptive error control in real-time communications, such as VoIP, using a recently proposed low-latency streaming code. We design and implement over UDP a prototype that can correct or retransmit lost packets depending on network conditions and application requirements.

To maintain the highest quality, Ivory attempts to correct as many lost packets as possible on the fly while incurring the smallest footprint in terms of coding overhead over the network. To achieve this objective, Ivory uses a deep reinforcement learning agent to estimate the best coding parameters in real time based on observed network states and learned experience. It learns offline the best coding parameters to use based on previously observed loss patterns, and takes the observed round-trip time into account to decide on the optimum decoding delay for a low-latency application. Our extensive experiments show that Ivory achieves a better trade-off between recovering packets and keeping redundancy low than state-of-the-art network-adaptive algorithms.
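The recovery principle behind such forward error correction can be sketched with a single XOR parity packet per block, which lets the receiver rebuild any one lost packet without a retransmission (a minimal code, far simpler than the low-latency streaming codes Ivory actually tunes):

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(block):
    # One XOR parity packet over a block of equal-length source packets.
    parity = bytes(len(block[0]))
    for pkt in block:
        parity = xor_bytes(parity, pkt)
    return parity

def recover(received, parity):
    # received: the block with exactly one entry replaced by None (lost).
    # XORing the parity with all surviving packets reproduces the loss.
    missing = received.index(None)
    acc = parity
    for i, pkt in enumerate(received):
        if i != missing:
            acc = xor_bytes(acc, pkt)
    rebuilt = received[:]
    rebuilt[missing] = acc
    return rebuilt
```

The block length and amount of parity are exactly the kind of coding parameters an adaptive agent would tune: more parity recovers burstier losses but raises the redundancy overhead.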

Choice-supportive bias affects video viewing experience: Subjective experiment and evaluation

Daichi Kominami (Osaka University, Japan)

The demands of users and service providers on communication quality of service (QoS) have become ever higher. However, the amount of traffic flowing through the Internet increases year by year, and it is becoming difficult to operate systems that guarantee a certain level of QoS to users. In recent years, not only QoS but also the user's own quality of experience (QoE) has become significant. Since QoE is a subjective measure of a user's perception of a service, it is considered to be affected by cognitive biases in human decision-making. In this paper, we conduct an experiment focusing on the choice-supportive bias and clarify the effect of this cognitive bias on users during video viewing.

Harmonizing Energy Efficiency and QoE for Brightness Scaling-based Mobile Video Streaming

Chao Qian, Daibo Liu and Hongbo Jiang (Hunan University, China)

Brightness scaling (BS) is an emerging and promising technique with outstanding energy efficiency for mobile video streaming. However, existing BS-based approaches totally neglect the inherent interaction among the BS factor, the video bitrate, and the environment context, and their combined impact on the user's visual perception in mobile scenarios, leading to disharmony between energy consumption and the user's quality of experience (QoE). In this paper, we propose PEO, a novel user-Perception-based video Experience Optimization for energy-constrained mobile video streaming, which jointly considers the inherent connection between the device's state of motion, the BS factor, the video bitrate, and the resulting user-perceived quality. By capturing the motion of an on-the-run device, PEO can infer the optimal bitrate and BS factor, thereby avoiding bitrate inefficiency for energy saving while guaranteeing the user-perceived QoE. On that basis, we formulate device motion-aware and user perception-aware video streaming as an optimization problem. We present an optimal algorithm that maximizes the objective function and propose an online bitrate selection algorithm. Our evaluation (based on trace analysis and a user study) shows that, compared with state-of-the-art techniques, PEO can raise the perceived quality by 23.8%–41.3% and save up to 25.2% in energy consumption.

Session Chair

Bo Wang, Tsinghua University

