finos / devops-automation

Provide a continuous compliance and assurance approach to DevOps that mutually benefits banks, auditors and regulators whilst accelerating DevOps adoption in engineering and fintech IT departments.

Home Page: http://devops.finos.org


How are FINOS members approaching adoption of AI Code Assistants?

ashukla13 opened this issue

AI Code Adoption Approach at Banks
There is a lot of interest in the adoption of AI code assistants like GitHub Copilot.

How are FINOS member organizations approaching adoption of these tools?

In particular, how are they thinking about the following when it comes to AI code assistants:

  • Risk considerations and guardrails
  • SDLC controls
  • Integrations and rollout approach (incl. use cases to prioritize)
  • Measuring benefits

@cnygardtw, will you be able to provide a view on what ThoughtWorks is seeing across regulated organizations regarding adoption of AI code assistants like GitHub Copilot?

Sure, I'll be able to provide some background on our experiences.

Minutes from the discussion among DevOps SIG SMEs at the March SIG meeting (#182):

  • These tools are code assistants; they don't replace the developer. Developers remain responsible for the generated code, including reviewing suggestions from AI code assistants for accuracy and conformance with existing policies - no different from code they would write without the use of such tools.
  • All code generated by such tools should be subject to existing SDLC controls and processes, including multiple human-in-the-loop controls - for requirements, code review (4-eyes check), code scanning (including for security vulnerabilities), and testing
  • Consider from a risk perspective:
    • information security (incl. data flow)
    • legal
    • privacy
    • model governance
    • misuse/abuse of chat features beyond technical questions
  • Other things to consider in discussions with AI code assistant vendors, especially from a legal and contractual perspective:
    • Input (prompt/context) retention or use for training models (beyond custom models that you specifically want to train)
    • Code suggestions violating license/copyright agreements; some vendors offer indemnification and/or filtering of certain types of output
  • It would help if vendors shared more information on how they implement some of these risk mitigations - risk groups may want this level of detail
    • Consider having a session with AI code assistant tool vendors so they can explain how they mitigate the risks outlined above.
  • Did not get to the discussion of metrics for tracking benefits and potential issues (e.g., defects); this will be covered in a separate meeting (likely April)
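To make the second point above concrete, here is a minimal sketch of a CI workflow that routes every pull request - AI-assisted or not - through the same automated gates. The workflow name, trigger, and test command are hypothetical illustrations, not anything prescribed by the SIG; the human review ("4 eyes") control would be enforced separately via branch protection rules.

```yaml
# Hypothetical sketch: AI-assisted code passes through the same SDLC
# gates as hand-written code. The required human review (4-eyes check)
# is configured separately via branch protection on the repository.
name: sdlc-controls
on: pull_request

jobs:
  scan:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v4
      # Static analysis, including security vulnerabilities (CodeQL)
      - uses: github/codeql-action/init@v3
        with:
          languages: python   # assumed project language
      - uses: github/codeql-action/analyze@v3
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Existing test suite runs unchanged on AI-assisted changes
      - run: make test      # placeholder for the project's test command
```

Marking both jobs as required status checks means a Copilot-suggested change cannot merge without passing the same scanning and testing that applies to any other change.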


Code Generation Indemnity: Further Reading referenced on today's call

Existing policies:
https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot-copyright-commitment-ai-legal-concerns/
https://cloud.google.com/blog/products/ai-machine-learning/protecting-customers-with-generative-ai-indemnification

Reporting coverage:
https://www.runtime.news/ai-vendors-promised-indemnification-against-copyright-lawsuits-the-details-are-messy/
https://learn.g2.com/ai-code-generators-legal-considerations