[Math_club] Some upcoming talks: 5/7 and 5/13
Jason Murphy
jamu at uoregon.edu
Mon Apr 27 09:51:10 PDT 2026
Hello math club!
I want to advertise two upcoming talks. Apologies if you have already seen announcements for these, but we want to be sure that interested undergraduates are aware of these cool talks, which are being organized for their benefit!
---
First, Hannah Larson from UC Berkeley will be giving the following talk (invited through the AWM group):
Undergraduate Talk (May 7th from 4-5pm in Tykeson 260)
Title: Lines in Algebraic Geometry
Abstract: Suppose you write down a general polynomial in x, y, z and consider the surface of all points where it vanishes. What can you say about the family of lines contained in this surface? Are there no lines, a finite number of lines, infinitely many? We'll derive an expected dimension for the family of lines depending on the degree of the polynomial (and generalize this to more variables). In the case of cubic surfaces, we'll address some more subtle questions regarding the geometry of lines over the real numbers. This story also motivates some of my joint work with Isabel Vogt, about a closely related problem concerning bitangents (lines that are tangent twice) to a plane quartic.
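For anyone curious ahead of time, here is one standard way the expected-dimension count can go (a quick sketch of a well-known computation, not taken from the talk itself): lines in projective 3-space form a 4-dimensional family (the Grassmannian G(1,3)), and asking a degree-d surface to contain a given line imposes d+1 conditions, since the polynomial restricted to the line becomes a degree-d polynomial in one variable whose d+1 coefficients must all vanish.

```latex
\[
  \dim G(1,3) = 4, \qquad
  \text{expected dimension of lines on } \{f = 0\}
  \;=\; 4 - (d+1) \;=\; 3 - d .
\]
% d = 1: planes carry a 2-parameter family of lines;
% d = 2: quadrics carry a 1-parameter family;
% d = 3: cubic surfaces are expected to contain finitely many lines
%        (famously, a smooth cubic surface has exactly 27);
% d >= 4: a general surface is expected to contain no lines at all.
```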
---
Second, Tyler Jarvis from BYU will give the following lecture (part of this year's Niven Lectures):
Undergraduate Lecture: “Aliasing in linear regression: new insights into a fundamental tool”, 4pm, Wednesday, May 13 in 105 Fenton Hall
Abstract: Have you ever thought about why car wheels in movies sometimes seem to rotate backwards? This is a phenomenon called aliasing, and it occurs in many settings, including in a fundamental tool of science, statistics, and machine learning called linear regression, where we want to find a function to fit data. Traditional methods of trying to understand and control the error in linear regression (and in other statistical models) rely on something called the bias-variance tradeoff, which, unfortunately, does not do a good job of explaining the really large models we see in modern machine learning and AI, and sometimes fails even on the smaller models. The missing ingredient for better understanding these models is aliasing. I’ll talk about aliasing and how accounting for it leads to powerful new ways to analyze these models and control their error.
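The backward-spinning-wheel effect mentioned in the abstract can be reproduced in a few lines. This is just an illustrative sketch of the phenomenon, not anything from the lecture: a camera records the wheel's angle once per frame, so rotation is only seen modulo a full turn, and a fast forward spin can alias to a slow backward one.

```python
def apparent_turns_per_frame(true_turns_per_frame: float) -> float:
    """Rotation per frame as a film camera would perceive it.

    The camera sees only the wheel's angle modulo one full turn, so the
    perceived per-frame rotation is the true rotation wrapped into the
    interval (-0.5, 0.5]: the smallest motion consistent with the frames.
    """
    wrapped = true_turns_per_frame % 1.0
    if wrapped > 0.5:
        wrapped -= 1.0
    return wrapped

# A wheel advancing 0.9 of a turn between frames appears to rotate
# *backwards* by 0.1 of a turn per frame -- the movie wagon-wheel effect.
print(round(apparent_turns_per_frame(0.9), 6))   # -> -0.1
```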
---
Go to these talks! They were organized for you!
Best,
Jason