00:14:53 jonathan.bratt: Sorry, didn’t realize I was unmuted. :)
00:15:03 Raymond Balise: be right back… I am listening
00:15:16 August: Time series can also be analysed with features, which means you can use decision trees and not rely on the sequential indexing.
00:17:08 August: For those wanting to understand time series cross-validation when we aren't modelling on features: https://otexts.com/fpp3/tscv.html
00:17:18 August: And its application in modeltime: https://cran.r-project.org/web/packages/modeltime.resample/vignettes/getting-started.html
00:25:53 Jon Harmon (jonthegeek): 10-fold CV is orange, LOOCV is black dashed, true is blue.
00:27:24 Mei Ling Soh: How do we decide on the k-value?
00:27:47 August: It's kind of a choice.
00:29:05 jonathan.bratt: because we have five fingers on each hand :)
00:29:39 Jon Harmon (jonthegeek): rsample::vfold_cv defaults to 10 so I generally do 10 🙃
00:29:47 August: the answer is in section 7.10.1 of ESL
00:29:50 Mei Ling Soh: 😅
00:30:04 August: By answer I mean explanation.
00:31:09 August: With time series I tend to use a k of about 5 or 6 when using models which utilise sequential indexing.
00:32:10 jonathan.bratt: Can you use the arrow keys?
00:38:11 August: This is quite interesting: https://machinelearningmastery.com/how-to-configure-k-fold-cross-validation/
00:39:53 Mei Ling Soh: Thanks, August!
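The resampling choices discussed above (picking k, and rolling-origin resampling for time series) can be set up with rsample. The sketch below is an illustration added here, not code from the meeting; `mtcars` and a made-up series stand in for real data.

```r
library(rsample)

# k-fold CV: vfold_cv() defaults to v = 10, as noted in the chat; set v = 5 for 5-fold.
set.seed(123)
folds_10 <- vfold_cv(mtcars)
folds_5  <- vfold_cv(mtcars, v = 5)

# Time-series CV (the "rolling forecasting origin" idea from the fpp3 link):
# each split trains on an expanding window and assesses the next few points.
toy_series <- data.frame(t = 1:24, y = rnorm(24))
ts_folds <- rolling_origin(
  toy_series,
  initial    = 12,   # size of the first training window
  assess     = 3,    # points held out for assessment in each split
  cumulative = TRUE  # keep growing the training window
)
```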
Meeting chat log
00:11:39 Jon Harmon (jonthegeek): https://twitter.com/whyRconf
00:14:38 SriRam: go for one page view, not scrolling view
00:25:46 Wayne Defreitas: lol
00:33:53 SriRam: It should be square root of mean, not mean of square root? Or did I read it wrong?
00:45:45 Wayne Defreitas: This was great thank you
00:46:27 Mei Ling Soh: Great! Thanks
00:49:19 jonathan.bratt: Ch6 is long
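If SriRam's question refers to the RMSE, the square root is indeed taken after averaging. A quick base-R illustration with made-up residuals:

```r
errors <- c(1, -2, 3, -4)   # made-up residuals, purely illustrative
sqrt(mean(errors^2))        # RMSE: square root of the mean squared error
mean(sqrt(errors^2))        # "mean of square roots" is the mean absolute error, a different quantity
```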
5.16.2 Cohort 2
Meeting chat log
00:29:18 Ricardo Serrano: https://www.statology.org/assumptions-of-logistic-regression/
00:29:26 Ricardo Serrano: https://towardsdatascience.com/assumptions-of-logistic-regression-clearly-explained-44d85a22b290
01:06:18 Jim Gruman: thank you Federica!!! more horsepower 🐎
01:07:47 Jim Gruman: I need to jump off. Talk to you all next week. Ciao
Meeting chat log
00:18:27 Federica Gazzelloni: Hello Jenny!
00:18:44 jlsmith3: Good morning!
00:39:57 Ricardo Serrano: References for bias/variance https://www.bmc.com/blogs/bias-variance-machine-learning/
00:40:26 Ricardo Serrano: https://youtu.be/EuBBz3bI-aA
00:40:36 Federica Gazzelloni: Thanks Ricardo!
00:57:55 Jim Gruman: 🐎 thank you everybody!
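For reference, the decomposition behind the bias/variance links Ricardo shared is ISLR's Equation 2.7:

$$
E\!\left[\big(y_0 - \hat{f}(x_0)\big)^2\right]
= \operatorname{Var}\!\big(\hat{f}(x_0)\big)
+ \big[\operatorname{Bias}\big(\hat{f}(x_0)\big)\big]^2
+ \operatorname{Var}(\varepsilon)
$$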
5.16.3 Cohort 3
Meeting chat log
00:07:07 Nilay Yönet: https://emilhvitfeldt.github.io/ISLR-tidymodels-labs/resampling-methods.html
00:33:41 Mei Ling Soh: Maybe we can wrap up the lab soon?
00:37:48 Fariborz Soroush: 👍
01:04:24 Mei Ling Soh: Two more minutes to go
01:05:19 Nilay Yönet: https://onmee.github.io/assets/docs/ISLR/Resampling-Methods.pdf
01:05:22 Nilay Yönet: https://waxworksmath.com/Authors/G_M/James/WWW/chapter_5.html
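The linked tidymodels lab walks through these resampling methods; below is a minimal sketch of the same pattern, added for reference and using `mtcars` rather than the lab's ISLR data, so the numbers will not match the lab.

```r
library(tidymodels)

set.seed(1)
folds <- vfold_cv(mtcars, v = 10)   # 10-fold CV splits

wf <- workflow() %>%
  add_model(linear_reg()) %>%       # ordinary least squares via the default "lm" engine
  add_formula(mpg ~ hp + wt)

cv_res <- fit_resamples(wf, resamples = folds)
collect_metrics(cv_res)             # cross-validated RMSE and R^2
```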
5.16.4 Cohort 4
Meeting chat log
00:37:41 Ronald Legere: https://en.wikipedia.org/wiki/Bootstrapping_(statistics)
00:38:01 Ronald Legere: https://stats.stackexchange.com/
00:54:52 shamsuddeen: Hello, I will leave now. See you next week!
00:55:04 Sandra Muroy: bye Sham!
00:55:13 shamsuddeen: Thank you Sandra
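A minimal sketch of the bootstrap idea from the linked Wikipedia article (an illustration added here, estimating the standard error of the median of `mtcars$mpg`):

```r
library(rsample)
library(purrr)

set.seed(42)
boots <- bootstraps(mtcars, times = 1000)   # 1000 bootstrap resamples

# Compute the statistic on each resample, then summarise its spread.
boot_medians <- map_dbl(boots$splits, ~ median(analysis(.x)$mpg))
sd(boot_medians)   # bootstrap estimate of the standard error of the median
```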
Meeting chat log
00:03:04 Ángel Féliz Ferreras: Exercises: https://angelfelizr.github.io/ISL-Solution-Book/05-execises.html
00:03:46 Derek Sollberger: I love this first exercise; I added it to my textbook :-)
00:18:03 Derek Sollberger: Did you find any advantages/convenience to using the tidymodels workflows here?
00:36:34 Ángel Féliz Ferreras: https://emilhvitfeldt.github.io/ISLR-tidymodels-labs/06-regularization.html
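On Derek's question about the convenience of tidymodels workflows: one advantage is that the preprocessing recipe stays fixed while the model spec is swapped out, which is the pattern the linked regularization lab follows. A rough sketch added here (using `mtcars` and an arbitrary penalty, not the lab's data or tuning):

```r
library(tidymodels)

rec <- recipe(mpg ~ ., data = mtcars) %>%
  step_normalize(all_numeric_predictors())   # standardize predictors

wf_lm <- workflow() %>%
  add_recipe(rec) %>%
  add_model(linear_reg())                    # plain least squares

# Same recipe, different model: a lasso via glmnet (requires the glmnet package).
wf_lasso <- update_model(
  wf_lm,
  linear_reg(penalty = 0.1, mixture = 1) %>% set_engine("glmnet")
)

fit(wf_lm, data = mtcars)
fit(wf_lasso, data = mtcars)
```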