First Guidelines For The Use Of Artificial Intelligence In Arbitration

MME (Contributor)

The Silicon Valley Arbitration & Mediation Center (SVAMC) has published "Guidelines on the Use of Artificial Intelligence in Arbitration" to ensure the ethical and effective use of AI in arbitral proceedings. Here is what you need to know about the first AI Guidelines for arbitration.

In a world that is increasingly characterized by technological advances, artificial intelligence (AI) has become an integral part of many areas of life. Arbitration is not immune to this. To ensure the ethical and effective use of AI in arbitration proceedings, the Silicon Valley Arbitration & Mediation Center (SVAMC) has published "Guidelines on the Use of Artificial Intelligence in Arbitration" ("AI Guidelines"). These AI Guidelines provide an initial framework for the responsible use of AI tools in arbitration and mark an important step forward in this area.


AI Guidelines

Introduction

The AI Guidelines, which were published in April 2024 following a public consultation period, aim to support arbitral institutions, arbitrators, parties and their representatives in the use of AI tools. They apply when the parties so agree or when an arbitral tribunal or arbitral institution so decides. The AI Guidelines define AI as systems that perform tasks typically associated with human cognition and emphasize that all existing legal and ethical obligations continue to apply.

The AI Guidelines include (i) General Guidelines for All Participants, (ii) Guidelines for Parties and Party Representatives, and (iii) Guidelines for Arbitrators.

General Guidelines for All Participants

1. Understanding the uses, limitations, and risks of AI applications: All participants in arbitration proceedings must familiarize themselves with the intended uses and limitations of the AI tools they employ and use those tools appropriately. They should inform themselves about possible errors (such as hallucinations) and other risks and seek to mitigate them.

2. Safeguarding confidentiality: Participants must ensure that the use of AI tools is compatible with their confidentiality obligations. Confidential information should not be entered into AI tools without appropriate authorization. Only AI tools that adequately protect confidentiality should be used.

3. Disclosure: Disclosure that AI tools were used in connection with an arbitration is not generally required. Decisions on disclosure should be made on a case-by-case basis, taking into account the relevant circumstances. Where disclosure is made, sufficient detail should be provided to allow the AI output to be reproduced and evaluated.

Guidelines for Parties and Party Representatives

1. Duty of competence or diligence in the use of AI: Party representatives must comply with applicable ethical rules and professional standards of competent and diligent representation when using AI tools in the context of an arbitration. They remain responsible for reviewing the output generated by AI tools.

2. Respect for the integrity of the proceedings and the evidence: Parties and their representatives shall not use AI to compromise the integrity of the arbitration or falsify evidence. The use of AI to mislead the arbitral tribunal is prohibited.

Guidelines for Arbitrators

1. No delegation of decision-making responsibilities: Arbitrators may not delegate any part of their mandate to AI tools. This includes in particular their decision-making responsibility. The use of AI must not replace the arbitrator's independent analysis of the facts, law and evidence.

2. Respect for due process: Arbitrators shall not rely on AI-generated information outside the record without prior disclosure to the parties and an opportunity to comment. If an AI tool cannot provide sources that can be independently verified, an arbitrator may not assume that such sources exist or are accurately characterized by the AI tool.

Model Clause for PO1

The AI Guidelines contain a model clause that can be included in the first procedural order (PO1). This model clause is intended to facilitate the integration of the AI Guidelines into international arbitration practice.

Final Remarks

The SVAMC AI Guidelines are a dynamic document that is regularly updated to reflect the latest technological developments. They provide a valuable framework for the ethical and effective use of AI in arbitration and contribute to the development of best practices. This proactive approach not only preserves the integrity of arbitration proceedings, but also increases confidence in the use of modern technologies.


