10.11 Meeting Videos

10.11.1 Cohort 1

Meeting chat log
00:22:21    Mei Ling Soh:   It's z < 0 in the book, pg 405
00:26:39    Jon Harmon (jonthegeek):    For anyone who wants to watch that video after this: https://www.youtube.com/watch?v=CqOfi41LfDw
00:53:53    Federica Gazzelloni:    part2 of the video: https://www.youtube.com/watch?v=IN2XmBhILt4&list=PLblh5JKOoLUIxGDQs4LFFD--41Vzf-ME1&index=4
00:57:40    Mei Ling Soh:   Thanks!
Meeting chat log
00:09:09    Jon Harmon (jonthegeek):    https://www.statlearning.com/resources-second-edition
00:41:00    Jon Harmon (jonthegeek):    (dataloader, dataset or list) A dataloader created with torch::dataloader() used for training the model, or a dataset created with torch::dataset() or a list. Dataloaders and datasets must return a list with at most 2 items. The first item will be used as input for the module and the second will be used as the target for the loss function.
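
The text Jon quoted above is from the luz documentation for the `data` argument of `fit()`. A minimal sketch of a dataset that satisfies that contract, using the torch R package (the dataset name and tensor shapes are invented for illustration, not taken from the book's lab):

```r
library(torch)

# A toy dataset whose .getitem() returns a two-element list:
# the first element is the input and the second is the target.
toy_ds <- dataset(
  name = "toy_ds",
  initialize = function(n = 100, p = 5) {
    self$x <- torch_randn(n, p)
    self$y <- torch_randn(n, 1)
  },
  .getitem = function(i) {
    list(self$x[i, ], self$y[i, ])
  },
  .length = function() {
    self$x$size(1)
  }
)

# Wrapped in a dataloader, this can be passed as the `data` argument of
# luz::fit(): the first list element feeds the module, the second the loss.
dl <- dataloader(toy_ds(), batch_size = 32, shuffle = TRUE)
```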

10.11.2 Cohort 2

Meeting chat log
00:28:37    Jim Gruman: https://playground.tensorflow.org/
00:29:06    Federica Gazzelloni:    Thanks Jim
Meeting chat log
00:14:41    Ricardo Serrano:    Neural network course https://youtu.be/ob1yS9g-Zcs

10.11.3 Cohort 3

Meeting chat log
00:10:25    Fariborz Soroush:   https://rfordatascience.slack.com/archives/C02CQ93F882/p1647029662583599
00:10:42    Fariborz Soroush:   https://www.statlearning.com/resources-second-edition
00:11:16    Fariborz Soroush:   https://hastie.su.domains/ISLR2/Labs/Rmarkdown_Notebooks/Ch10-deeplearning-lab-torch.html

10.11.4 Cohort 4

Meeting chat log
00:25:11    kevin_kent: For dropout I believe the nodes themselves are dropped
00:26:50    Ron:    They mention that lasso can be used too
00:35:16    Ron:    like that animation!
00:35:38    Sandra Muroy:   yes! very cool!
00:42:54    Ron:    It's exercise 4 I was thinking of ;)
00:42:56    Ron:    Sorry
00:47:43    Ron:    max/avg pooling == downsampling for sure
00:50:41    Ron:    https://medium.com/@bdhuma/which-pooling-method-is-better-maxpooling-vs-minpooling-vs-average-pooling-95fb03f45a9
01:03:15    Ron:    can you link the book?
01:04:00    kevin_kent: https://course.fast.ai/Resources/book.html
01:05:54    Ron:    Deep learning libraries are Python-focused, so I think it is easier (less impedance mismatch) to use Python
01:08:18    kevin_kent: Jeremy Howard has really worked hard at the teaching approach for that course and book, and has consulted materials about the science of learning. I find his stuff really inspirational
01:11:47    Ron:    https://www.manning.com/books/deep-learning-and-the-game-of-go
01:12:10    Sandra Muroy:   cool
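
Two of the points in the chat above (dropout dropping nodes, and max/average pooling as downsampling) can be seen directly with a couple of torch layers. This is a rough sketch, not code from the book's lab, and the tensor shapes are arbitrary:

```r
library(torch)

# Dropout zeroes a random subset of units during training
# (the remaining units are rescaled so the expected activation is unchanged).
drop <- nn_dropout(p = 0.5)
drop$train()
drop(torch_ones(1, 8))            # roughly half the entries become 0

# Max and average pooling downsample: a 4x4 feature map pooled with a
# 2x2 window becomes 2x2.
img <- torch_randn(1, 1, 4, 4)    # (batch, channel, height, width)
nn_max_pool2d(kernel_size = 2)(img)$shape   # 1 1 2 2
nn_avg_pool2d(kernel_size = 2)(img)$shape   # 1 1 2 2
```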
Meeting chat log
00:47:28    kevin_kent: The term “gradient boosting” comes from the idea of “boosting” or improving a single weak model by combining it with a number of other weak models in order to generate a collectively strong model. Gradient boosting is an extension of boosting where the process of additively generating weak models is formalized as a gradient descent algorithm over an objective function. Gradient boosting sets targeted outcomes for the next model in an effort to minimize errors. Targeted outcomes for each case are based on the gradient of the error (hence the name gradient boosting) with respect to the prediction.

GBDTs iteratively train an ensemble of shallow decision trees, with each iteration using the error residuals of the previous model to fit the next model. The final prediction is a weighted sum of all of the tree predictions. Random forest “bagging” minimizes the variance and overfitting, while GBDT “boosting” minimizes the bias and underfitting.

XGBoost is a scalable and highly accurate implementation …
00:53:53    Ron:    I have some reading to do ^^^ thanks!
01:09:13    shamsuddeen:    I need to hop off now. See you all next week.
01:14:03    Ron:    I am going to hop off as well, see you next time!
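
Kevin's quoted description of gradient boosting can be made concrete with a small fit. This sketch uses the gbm package (used for boosting in the book's Chapter 8 lab) rather than XGBoost, and the data and tuning values are arbitrary:

```r
library(gbm)

# Gradient-boosted trees: each of n.trees shallow trees is fit to the
# residual error of the ensemble built so far, and the final prediction
# is the shrunken sum of all the trees' contributions.
boost_fit <- gbm(
  mpg ~ .,
  data = mtcars,
  distribution = "gaussian",   # squared-error loss
  n.trees = 100,               # number of weak learners added sequentially
  interaction.depth = 2,       # keep each tree shallow
  shrinkage = 0.1              # learning rate applied to each new tree
)

head(predict(boost_fit, newdata = mtcars, n.trees = 100))
```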

10.11.5 Cohort 5

Meeting chat log
00:12:08    Federica Gazzelloni:    start
00:15:39    Federica Gazzelloni:    https://github.com/tristanoprofetto/neural-networks/blob/main/ANN/Regressor/feedforward.R
00:33:45    Federica Gazzelloni:    https://www.youtube.com/watch?v=CqOfi41LfDw
01:09:11    Federica Gazzelloni:    end
Meeting chat log
00:07:17    Lucio Cornejo:  Hello, everyone
00:07:40    Derek Sollberger (he/him):  Good afternoon
00:09:11    Federica Gazzelloni:    start
00:26:20    Federica Gazzelloni:    https://hastie.su.domains/ISLR2/Labs/Rmarkdown_Notebooks/Ch10-deeplearning-lab-torch.html
00:27:28    Federica Gazzelloni:    https://hastie.su.domains/ISLR2/Labs/Rmarkdown_Notebooks/Ch10-deeplearning-lab-keras.html
00:44:30    Federica Gazzelloni:    (book source (10.20) on page 428)
00:44:47    Derek Sollberger (he/him):  The keras workflow and pipes are interesting
00:53:54    Lucio Cornejo:  no questions from me
00:57:52    Federica Gazzelloni:    end
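
Derek's comment about the keras workflow and pipes refers to the style in the keras lab linked above, where layers are added to a sequential model with the pipe operator. A minimal sketch of that pattern (the layer sizes here are placeholders, not the lab's values):

```r
library(keras)

# Layers are chained onto a sequential model with %>%, and the same
# pipe style is used to compile the model.
model <- keras_model_sequential() %>%
  layer_dense(units = 50, activation = "relu", input_shape = 10) %>%
  layer_dropout(rate = 0.4) %>%
  layer_dense(units = 1)

model %>% compile(
  loss = "mse",
  optimizer = optimizer_rmsprop(),
  metrics = list("mean_absolute_error")
)
```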