The Interpretability Dilemma and Resolution of Artificial Intelligence Judicature

Affiliated scholar:

Zhang Xiaojun

Affiliated department:

School of International Law

Authors:

Zhou Yuan; Zhang Xiaojun

Abstract:

Strengthening research on the development and risks of artificial intelligence in the judiciary is a task of our times, and the interpretability dilemma of AI judicature is especially critical. The interpretability of AI judicature refers to the comprehensibility and transparency of judicial decisions or actions, and involves four key elements: underlying data, target tasks, algorithmic models, and human cognition. The non-interpretability dilemma arises mainly from data failure, algorithmic black boxes, the limitations of intelligent technology, and deficiencies in decision-making procedures and values. The supposed non-interpretability of AI judicature is, however, a false proposition: interpretability rests on both cognitive and institutional foundations. Specific strategies for resolving the dilemma include: building a system for the disclosure and sharing of judicial information to improve the screening and use of relevant data; constructing operating standards and institutional rules for the judicial system from a perspective that combines soft and hard law; strengthening collaborative governance among the actors involved across the whole process; expanding judges' interpretive discretion through guiding cases and judicial interpretations while improving legal interpretation techniques; cultivating interdisciplinary talent to better guide AI judicial decision-making models; and drawing on judges' self-discipline and initiative to realize human-machine collaboration in intelligent judicial decision-making. In the future, it will be necessary not only to balance judicial values with technical rationality but also to calibrate AI's differentiated involvement in the judicial process, so as to advance the strategic goals of AI judicature.

Language:

Chinese

Date of publication:

2023-03-15

Disciplines:

Law; Control Science and Engineering

Indexed in:

CSSCI-E

Date submitted:

2023-03-27

Citation:

Zhou Yuan; Zhang Xiaojun. The Interpretability Dilemma and Resolution of Artificial Intelligence Judicature [J]. Law and Economy, 2023(02): 3-20.

Full-text license:

Creative Commons Attribution (CC BY)

  • dc.title
  • 人工智能司法的可解释性困境及其纾解
  • dc.contributor.author
  • 周媛;张晓君
  • dc.contributor.author
  • Zhou Yuan;Zhang Xiaojun
  • dc.contributor.affiliation
  • KoGuan School of Law, Shanghai Jiao Tong University; School of International Law, Southwest University of Political Science and Law
  • dc.publisher
  • 财经法学
  • dc.publisher
  • Law and Economy
  • dc.identifier.year
  • 2023
  • dc.identifier.issue
  • 02
  • dc.identifier.volume
  • No.50
  • dc.identifier.page
  • 3-20
  • dc.date.issued
  • 2023-03-15
  • dc.language.iso
  • zh
  • dc.subject
  • artificial intelligence;justice;algorithm;explainability;collaborative governance
  • dc.description.abstract
  • Strengthening research on the development and risks of artificial intelligence in the judiciary is a task of our times, and the interpretability dilemma of AI judicature is especially critical. The interpretability of AI judicature refers to the comprehensibility and transparency of judicial decisions or actions, and involves four key elements: underlying data, target tasks, algorithmic models, and human cognition. The non-interpretability dilemma arises mainly from data failure, algorithmic black boxes, the limitations of intelligent technology, and deficiencies in decision-making procedures and values. The supposed non-interpretability of AI judicature is, however, a false proposition: interpretability rests on both cognitive and institutional foundations. Specific strategies for resolving the dilemma include: building a system for the disclosure and sharing of judicial information to improve the screening and use of relevant data; constructing operating standards and institutional rules for the judicial system from a perspective that combines soft and hard law; strengthening collaborative governance among the actors involved across the whole process; expanding judges' interpretive discretion through guiding cases and judicial interpretations while improving legal interpretation techniques; cultivating interdisciplinary talent to better guide AI judicial decision-making models; and drawing on judges' self-discipline and initiative to realize human-machine collaboration in intelligent judicial decision-making. In the future, it will be necessary not only to balance judicial values with technical rationality but also to calibrate AI's differentiated involvement in the judicial process, so as to advance the strategic goals of AI judicature.
  • dc.description.sponsorshipPCode
  • 21BFX136
  • dc.description.sponsorship
  • Interim result of the National Social Science Fund of China project "Research on the Legal Mechanism for Promoting Green Building Development in Urban Renewal" (21BFX136)
  • dc.description.sponsorshipsource
  • National Social Science Fund of China
  • dc.identifier.CN
  • 10-1281/D
  • dc.identifier.issn
  • 2095-9206
  • dc.identifier.if
  • 2.373
  • dc.subject.discipline
  • D926.2;TP18