Frontiers of Computer Science

ISSN 2095-2228

ISSN 2095-2236(Online)

CN 10-1014/TP

Postal distribution code: 80-970

2019 Impact Factor: 1.275


2025, Vol. 19, No. 3   Publication date: 2025-03-15

Robust domain adaptation with noisy and shifted label distribution
Shao-Yuan LI, Shi-Ji ZHAO, Zheng-Tao CAO, Sheng-Jun HUANG, Songcan CHEN
Frontiers of Computer Science. 2025, 19 (3): 193310-.  
https://doi.org/10.1007/s11704-024-3810-0

Abstract | HTML | PDF (6077KB)

Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from labeled source domains to unlabeled target domains whose data or label distribution differs. Previous UDA methods have achieved great success when the source-domain labels are clean. However, acquiring even scarce clean labels in the source domain is itself costly. In the presence of source-domain label noise, traditional UDA methods degrade severely because they do not account for it. In this paper, we propose Robust Self-training with Label Refinement (RSLR) to address this issue. RSLR adopts a self-training framework that maintains a Labeling Network (LNet) on the source domain, which provides confident pseudo-labels for target samples, and a Target-specific Network (TNet) trained on the pseudo-labeled samples. To combat the effect of label noise, LNet progressively distinguishes and refines the mislabeled source samples. Combined with class re-balancing to counter the label distribution shift, RSLR achieves strong performance on extensive benchmark datasets.
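Two ingredients of this kind of robust self-training can be sketched in a few lines: the small-loss rule commonly used to separate likely-clean from likely-noisy labeled samples, and confidence-thresholded pseudo-labeling of target samples. This is an illustrative sketch of the general ideas, not RSLR's actual implementation; all function names, fractions, and thresholds are placeholders.

```python
def split_clean_noisy(losses, clean_fraction=0.7):
    """Small-loss rule: samples with the lowest per-example loss are
    treated as likely clean; the rest are flagged for label refinement."""
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    k = int(len(losses) * clean_fraction)
    return order[:k], order[k:]

def confident_pseudo_labels(probs, threshold=0.9):
    """Keep a pseudo-label for a target sample only when the labeling
    network's maximum class probability clears a confidence threshold."""
    labels = {}
    for i, p in enumerate(probs):
        c = max(range(len(p)), key=lambda j: p[j])
        if p[c] >= threshold:
            labels[i] = c
    return labels

losses = [0.1, 2.3, 0.2, 1.9, 0.05]
clean, noisy = split_clean_noisy(losses, clean_fraction=0.6)
print(clean, noisy)            # [4, 0, 2] [3, 1]
probs = [[0.95, 0.05], [0.6, 0.4]]
print(confident_pseudo_labels(probs))  # {0: 0}
```

In practice the clean fraction and confidence threshold are tuned or scheduled over training; here they are fixed constants for illustration.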

An extension of process calculus for asynchronous communications between agents with epistemic states
Huili XING, Zhaohui ZHU, Jinjin ZHANG
Frontiers of Computer Science. 2025, 19 (3): 193401-.  
https://doi.org/10.1007/s11704-023-3208-4

Abstract | HTML | PDF (2198KB)

Modeling agents’ epistemic states and their changes plays a central role in intelligent agent systems. Asynchrony is likewise key in distributed systems, where transmitted messages may not be received instantly by the agents. To characterize asynchronous communication, Asynchronous Announcement Logic (AAL) has been proposed, focusing on the logical laws governing the change of epistemic state after receiving information. However, AAL does not cover the interactive behaviours between an agent and its environment. Epistemic interactions can change agents’ epistemic states, and the latter in turn affect the former. By enriching the well-known π-calculus with operators for passing basic facts, and applying the well-known action model logic to describe agents’ epistemic states, this paper presents the e-calculus to model epistemic interactions between agents with epistemic states. The e-calculus can characterize both synchronous and asynchronous communication between agents. To capture asynchrony, a buffer pool is constructed to store the announced basic facts, and each agent reads these facts from the pool in some order. Based on the transmission of link names, the e-calculus can realize reading from this buffer pool in different orders. The paper gives two examples: one reads facts in the order in which they were announced (first-in-first-out, FIFO), the other in an arbitrary order.
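The buffer-pool idea described above can be illustrated operationally: announced facts enter a shared pool in announcement order, and each agent keeps an independent read cursor, so delivery is asynchronous. This is a toy sketch of the FIFO reading discipline only (the class and method names are illustrative, not part of the e-calculus formalism).

```python
class BufferPool:
    """Shared pool of announced basic facts with per-agent read cursors."""

    def __init__(self):
        self.facts = []     # announcements, in the order they were sent
        self.cursors = {}   # each agent's read position into self.facts

    def announce(self, fact):
        self.facts.append(fact)

    def read_next(self, agent):
        """FIFO read: the agent consumes facts in announcement order,
        independently of how far other agents have read."""
        pos = self.cursors.get(agent, 0)
        if pos >= len(self.facts):
            return None     # nothing new for this agent yet
        self.cursors[agent] = pos + 1
        return self.facts[pos]

pool = BufferPool()
pool.announce("p")
pool.announce("q")
print(pool.read_next("alice"))  # "p" — alice reads in announcement order
print(pool.read_next("bob"))    # "p" — bob's cursor is independent
print(pool.read_next("alice"))  # "q"
```

The paper's second reading discipline, arbitrary order, would amount to letting `read_next` pick any not-yet-read fact for that agent instead of the next one.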

Incentive mechanism design via smart contract in blockchain-based edge-assisted crowdsensing
Chenhao YING, Haiming JIN, Jie LI, Xueming SI, Yuan LUO
Frontiers of Computer Science. 2025, 19 (3): 193802-.  
https://doi.org/10.1007/s11704-024-3542-1

Abstract | HTML | PDF (10232KB)

Edge-assisted mobile crowdsensing (EMCS) has gained significant attention as a data collection paradigm. However, existing incentive mechanisms for EMCS rely on centralized platforms, making them impractical for the decentralized nature of EMCS systems. To address this limitation, we propose CHASER, an incentive mechanism designed for blockchain-based EMCS (BEMCS) systems. CHASER attracts more participants by satisfying the incentive requirements of budget balance, double-sided truthfulness, and double-sided individual rationality while achieving high social welfare. Furthermore, the proposed BEMCS system with CHASER in smart contracts guarantees data confidentiality through an asymmetric encryption scheme and participant anonymity through the zero-knowledge succinct non-interactive argument of knowledge (zk-SNARK); this also restrains malicious behaviors by participants. Finally, simulations show that CHASER improves social welfare by approximately 42% compared with state-of-the-art approaches, and achieves a competitive ratio of approximately 0.8 and a task completion rate above 0.8 in large-scale systems. These findings highlight the robustness and strong performance of CHASER as an incentive mechanism within the BEMCS system.
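The incentive properties listed above (double-sided truthfulness, individual rationality, budget balance) are the classic design goals of double auctions between task requesters and sensing workers. As an illustrative stand-in, not CHASER's actual mechanism, the well-known McAfee trade-reduction double auction achieves truthfulness, individual rationality, and weak budget balance by sacrificing at most one efficient trade:

```python
def mcafee_double_auction(bids, asks):
    """McAfee trade-reduction double auction.
    bids: buyers' valuations; asks: sellers' costs.
    Returns (num_trades, buyer_price, seller_price)."""
    bids = sorted(bids, reverse=True)   # buyers, highest valuation first
    asks = sorted(asks)                 # sellers, lowest cost first
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1                          # k = number of efficient trades
    if k == 0:
        return 0, None, None            # no gains from trade
    if k < min(len(bids), len(asks)):
        p = (bids[k] + asks[k]) / 2     # price from the first excluded pair
        if asks[k - 1] <= p <= bids[k - 1]:
            return k, p, p              # all k trade at one uniform price
    # otherwise drop the marginal trade; the excluded pair sets the prices
    return k - 1, bids[k - 1], asks[k - 1]

trades, pay, receive = mcafee_double_auction([9, 7, 5, 2], [1, 3, 6, 8])
print(trades, pay, receive)  # 2 5.5 5.5
```

When buyers pay more than sellers receive (the trade-reduction branch), the surplus is retained rather than redistributed, which is why the guarantee is weak rather than exact budget balance.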

Fairness is essential for robustness: fair adversarial training by identifying and augmenting hard examples
Ningping MOU, Xinli YUE, Lingchen ZHAO, Qian WANG
Frontiers of Computer Science. 2025, 19 (3): 193803-.  
https://doi.org/10.1007/s11704-024-3587-1

Abstract | HTML | PDF (5214KB)

Adversarial training has been widely considered the most effective defense against adversarial attacks. However, recent studies have demonstrated a large discrepancy in the class-wise robustness of adversarially trained models, leading to two potential issues: first, the overall robustness of a model is limited by its weakest class; and second, ethical concerns arise from unequal protection, where certain societal demographic groups receive weaker defenses. Despite these issues, solutions to the discrepancy remain largely underexplored. In this paper, we advance beyond existing methods that focus on class-level solutions. Our investigation reveals that hard examples, identified by higher cross-entropy values, provide more fine-grained information about the discrepancy. Furthermore, we find that enhancing the diversity of hard examples effectively reduces the robustness gap between classes. Motivated by these observations, we propose Fair Adversarial Training (FairAT) to mitigate the discrepancy in class-wise robustness. Extensive experiments on various benchmark datasets and adversarial attacks demonstrate that FairAT outperforms state-of-the-art methods in both overall robustness and fairness. For a WRN-28-10 model trained on CIFAR10, FairAT improves the average and worst-class robustness by 2.13% and 4.50%, respectively.
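The selection step described above, ranking examples by cross-entropy loss and treating the highest-loss ones as hard candidates for extra augmentation, can be sketched as follows. This is a minimal illustration, not FairAT's actual code; the function names and the hard fraction are placeholders.

```python
import math

def cross_entropy(probs, label):
    """Per-example cross-entropy: -log of the probability of the true class."""
    return -math.log(probs[label])

def select_hard_examples(batch_probs, labels, hard_fraction=0.25):
    """Indices of the highest-loss (hardest) examples in a batch.
    In a FairAT-style pipeline these would receive extra augmentation."""
    losses = [cross_entropy(p, y) for p, y in zip(batch_probs, labels)]
    order = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    k = max(1, int(len(losses) * hard_fraction))
    return order[:k]

# Toy batch: softmax outputs over two classes, with true labels.
probs = [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.55, 0.45]]
labels = [0, 0, 1, 1]
print(select_hard_examples(probs, labels, hard_fraction=0.5))  # [1, 3]
```

Examples 1 and 3 are selected because the model assigns their true classes the lowest probabilities, hence the highest cross-entropy.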

4 articles