Wednesday Jan 24, 2024

AI & Data Law: Large Language Models in Generative AI

Bill Tanenbaum and Amir Ghavi of Fried Frank discuss the different types of open and vendor-provided Large Language Models (LLMs), how they work, and what “fine-tuning” a model means. AI models can be viewed as the inverse of software: software starts with rules, applies those rules to data, and generates output; AI starts with data, applies an algorithm to the data, and generates rules. To fine-tune a model, a company starts with a pre-trained LLM and adds data specific to a desired set of corporate tasks, generating tailored rules. Along with other forward-looking issues, Bill and Amir address why fine-tuning is the future of corporate use of Generative AI, why hallucinations will become less problematic, and the contract terms and other factors that companies and their counsel should consider in selecting a pre-trained LLM.
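
For listeners who want to see the mechanics behind that description, below is a minimal sketch of what fine-tuning can look like in code. It assumes the open-source Hugging Face transformers and datasets libraries, GPT-2 as a stand-in for the pre-trained model, and a hypothetical corporate_tasks.txt file of company-specific text; none of these are discussed in the episode, and the sketch is an illustration of the concept rather than a production recipe.

```python
# Minimal fine-tuning sketch: start from a pre-trained LLM, add company-specific
# data, and let training adjust the model's weights ("rules") toward those tasks.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

base_model = "gpt2"  # stand-in for whichever pre-trained LLM is selected

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# The "added data": text drawn from the company's own tasks (hypothetical file).
dataset = load_dataset("text", data_files={"train": "corporate_tasks.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="fine_tuned_model",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()       # nudges the pre-trained weights toward the corporate data
trainer.save_model()  # the tailored model, ready for the desired set of tasks
```

In practice, a company would substitute the open or vendor-provided model it has licensed and its own curated data, which is where the contract terms and selection factors Bill and Amir discuss come into play.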

AI & Data Law, part of the PLI Ever Current podcast, brings you conversations with thought leaders at the intersection of AI & data law. PLI is proud to keep you ever current with timely programs, publications, and podcasts. Visit http://pli.edu/aipod  to learn more about our AI resources. 

Please note: CLE is not offered for listening to this podcast, and the views and opinions expressed within represent those of the speakers and host, and not necessarily those of PLI.

Recorded on 11/15/23

Copyright 2023. All rights reserved.

Version: 20240731