P2.T10.21.2. Artificial intelligence risk and governance

David Harper CFA FRM

Learning objectives: Identify and discuss the categories of potential risks associated with the use of AI by financial firms and describe the risks that are considered under each category. Describe the four core components of AI governance and recommended practices related to each. Explain how issues related to interpretability and discrimination can arise from the use of AI by financial firms. Describe practices financial firms can adopt to mitigate AI risks.

Questions:

21.2.1. According to the Artificial Intelligence/Machine Learning Risk & Security Working Group (AIRS), what are the four core components of artificial intelligence (AI) governance?

a. Prioritization, behaviors, testing, and trust
b. Governance, quality, privacy, and compliance
c. Definitions, inventory, policies, and framework
d. Roles, responsibilities, relationships, and reporting

21.2.2. According to the Artificial Intelligence/Machine Learning Risk & Security Working Group (AIRS), two concepts are crucial to the risks created by artificial intelligence and machine learning (AI&ML): discrimination and interpretability. Discrimination, which concerns unfairly biased outcomes, seems easier to define than interpretability. Consistent with many modern definitions, AIRS parses interpretability into two aspects, interpretability versus explainability: "Interpretability relates to the ability of humans to gain insight into the inner workings of AI systems, which may be complex and opaque. In a practical sense, the two primary aspects of AI/ML interpretability are directly interpretable system mechanisms and posthoc explanations (explainability) of system mechanisms and predictions."

In regard to the comparison between interpretability (aka, interpretable or symbolic AI, SAI) and explainability (aka, explainable AI, XAI), according to AIRS, each of the following is a true statement EXCEPT which is false?

a. Models with high degrees of interpretability (SAI) are more accurate
b. Linear regression and decision tree models are examples of interpretable models (SAI)
c. A random forest has low interpretability and its complexity therefore requires explanation (XAI)
d. Explainable methods (aka, explainability, XAI) provide post-hoc explanation(s) for outputs of black-box models
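To make the SAI-versus-XAI contrast concrete, here is a minimal sketch (not from AIRS; the scorecard, the stand-in black-box function, and the coefficient values are all hypothetical). A directly interpretable model is its own explanation, while an opaque model needs a post-hoc method that probes its outputs from the outside:

```python
# Hypothetical illustration of interpretability (SAI) vs. explainability (XAI).

def interpretable_score(income, debt):
    """SAI: a linear scorecard whose mechanism is directly readable.
    Each coefficient states exactly how an input moves the score."""
    return 0.7 * income - 0.4 * debt

def black_box_score(income, debt):
    """Stand-in for an opaque model (e.g., a random forest ensemble)."""
    return (income * debt) ** 0.5 + 0.1 * income

def post_hoc_explanation(model, income, debt, eps=1e-4):
    """XAI: approximate local input sensitivities by finite differences,
    explaining one prediction without opening the model itself."""
    base = model(income, debt)
    d_income = (model(income + eps, debt) - base) / eps
    d_debt = (model(income, debt + eps) - base) / eps
    return {"income_effect": d_income, "debt_effect": d_debt}

print(interpretable_score(100.0, 50.0))   # the mechanism IS the explanation
print(post_hoc_explanation(black_box_score, 100.0, 50.0))
```

The finite-difference probe stands in for post-hoc techniques (e.g., LIME- or SHAP-style local explanations) that treat the model purely as an input-output map.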

21.2.3. Which bias-mitigation algorithm intervenes at the point of output and has the following advantages: it does not require access to the training process; it is suitable for run-time environments; and it treats the underlying model as a black box?

a. Pre-processing (on training data)
b. Within- or in-processing (on learning procedure)
c. Post-processing (on predictions)
d. Supra-processing (on nodes)
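The key property of an output-stage intervention is that it touches only the model's predictions. A minimal sketch (the group labels, scores, and threshold values below are illustrative, not from any named fairness library) shows why no access to training data or the learning procedure is needed:

```python
# Hypothetical post-processing mitigation: adjust decision thresholds
# per group on raw scores emitted by an already-trained black-box model.

def post_process(scores, groups, thresholds):
    """Apply a group-specific decision threshold to model scores.
    Operates only on predictions, so it works at run time without
    retraining or inspecting the underlying model."""
    return [int(s >= thresholds[g]) for s, g in zip(scores, groups)]

scores = [0.62, 0.55, 0.71, 0.40]
groups = ["A", "B", "A", "B"]
# Lower threshold for group B to move approval rates closer (illustrative).
thresholds = {"A": 0.60, "B": 0.50}
decisions = post_process(scores, groups, thresholds)
print(decisions)  # -> [1, 1, 1, 0]
```

Pre-processing and in-processing, by contrast, require reaching back into the training data or the learning procedure, which is exactly what the advantages in the question rule out.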

Answers: