Data Science & AI Conference 2025
Presented by Lander Analytics, the Data Science & AI Conference brings together top data scientists, AI researchers, and industry leaders from the private and public sectors.
Resources from this event
Max Kuhn - Measuring LLM Effectiveness
For information on upcoming conferences, visit https://www.dataconf.ai.
Measuring LLM Effectiveness by Max Kuhn
Abstract: How can we quantify how accurately LLMs perform? In late 2024, Anthropic released a preprint of a manuscript about statistically analyzing model evaluations. The concepts are on target, but the statistical tactics have narrow applicability. A simpler statistical framework can quantify LLM performance and applies to a much wider range of scenarios and experimental designs. We’ll describe these methods and show an example.
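The abstract doesn't spell out the framework itself, but the core idea of quantifying eval accuracy with uncertainty can be illustrated simply. Below is a minimal, hypothetical sketch (not the talk's method): a Wilson score interval around the proportion of eval questions a model answered correctly, using only the standard library. The counts are made up for illustration.

```python
import math

def wilson_interval(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a proportion of correct answers."""
    p = correct / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return center - half, center + half

# Hypothetical example: a model answers 78 of 100 eval questions correctly.
lo, hi = wilson_interval(78, 100)
print(f"accuracy = 0.78, 95% CI = ({lo:.3f}, {hi:.3f})")
```

A single accuracy number like 0.78 hides how much it could move on a different sample of questions; an interval makes that uncertainty explicit, which is the spirit of statistically analyzing model evaluations.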
Bio: Max Kuhn is a software engineer at Posit PBC (née RStudio). He is working on improving R’s modeling capabilities and maintaining about 30 packages, including caret. Previously, he was a Senior Director of Nonclinical Statistics at Pfizer Global R&D in Connecticut. He has been applying models in the pharmaceutical and diagnostic industries for over 18 years. Max has a Ph.D. in Biostatistics. He and Kjell Johnson wrote the book Applied Predictive Modeling, which won the American Statistical Association’s Ziegel award, recognizing the best book reviewed in Technometrics in 2015. He has co-written several other books: Feature Engineering and Selection, Tidy Models with R, and Applied Machine Learning for Tabular Data (in progress).
Presented at The New York Data Science & AI Conference (August 27, 2025)
Hosted by Lander Analytics
(https://www.landeranalytics.com)

Daniel Chen - LLMs, Chatbots, and Dashboards: Visualize Your Data with Natural Language
LLMs, Chatbots, and Dashboards: Visualize Your Data with Natural Language by Daniel Chen
Abstract: LLMs have a lot of hype around them these days. Let’s demystify how they work and put them in context for data science use. As data scientists, we want our results to be inspectable, reliable, reproducible, and replicable, and we already have many tools to help on this front. LLMs pose a new challenge, however: we may not get the same results back from the same query. This means working out the areas where LLMs excel and using those behaviors in our data science artifacts. This talk will introduce you to LLMs and the ellmer (R) and chatlas (Python) packages, and show how they can be integrated into a Shiny app to create an AI-powered dashboard. We’ll see how we can leverage the tasks LLMs are good at to improve our data science products.
Presented at The New York Data Science & AI Conference (August 26, 2025)
Hosted by Lander Analytics
(https://www.landeranalytics.com)