Abhishek Thakur
Title: An Introduction To Transfer Learning In NLP and HuggingFace
Abstract: In this talk, I’ll start by introducing the recent breakthroughs in NLP that resulted from the combination of Transfer Learning schemes and Transformer architectures. The second part of the talk will be dedicated to an introduction of the open-source tools released by HuggingFace, in particular our Transformers, Tokenizers and Datasets libraries and our models.
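To give a flavor of what the Tokenizers library mentioned above does, here is a minimal sketch of greedy longest-match subword tokenization (the WordPiece-style scheme used by models such as BERT). The toy vocabulary and function name are hypothetical, for illustration only; HuggingFace's actual library provides fast, trained tokenizers.

```python
# Minimal sketch of greedy longest-match subword tokenization,
# illustrating the WordPiece-style splitting used by BERT-like models.
# The toy vocabulary below is hypothetical and for illustration only.

def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    """Split a word into the longest subword pieces found in vocab."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        match = None
        # Try the longest possible piece first, then shrink from the right.
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # continuation pieces get a "##" prefix
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:
            return [unk]  # no subword matched: the whole word is unknown
        tokens.append(match)
        start = end
    return tokens

toy_vocab = {"trans", "##form", "##ers", "token", "##izer", "##s"}
print(wordpiece_tokenize("transformers", toy_vocab))  # ['trans', '##form', '##ers']
print(wordpiece_tokenize("tokenizers", toy_vocab))    # ['token', '##izer', '##s']
```

In the real libraries, the vocabulary is learned from a corpus and the matching runs in optimized Rust rather than Python, but the greedy longest-match idea is the same.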
Bio: Thomas Wolf is co-founder and Chief Science Officer of HuggingFace. His team is on a mission to catalyze and democratize NLP research. Before HuggingFace, Thomas earned a Ph.D. in physics and later a law degree, working as a physics researcher and a European Patent Attorney.
https://thomwolf.io/
About HuggingFace: HuggingFace does open research and open-source development in the field of NLP, creating popular platforms for NLP developers and researchers to use, build, and study state-of-the-art natural language processing technologies, including text classification, information extraction, summarization, text generation, and conversational artificial intelligence.
https://huggingface.co/
#NLP #DeepLearning #HuggingFace
Please subscribe and like the video to help me stay motivated to make more awesome videos like this one. 🙂
To buy my book, Approaching (Almost) Any Machine Learning Problem, please visit: https://bit.ly/buyaaml
Follow me on:
Twitter: https://twitter.com/abhi1thakur
LinkedIn: https://www.linkedin.com/in/abhi1thakur/
Kaggle: https://kaggle.com/abhishek
Instagram: https://instagram.com/abhi4ml
Interesting links from the Talk:
Slides: https://docs.google.com/presentation/d/1GTRsJEgdjpd05Al1FEcqkymv-9sgiZIGc4melCA1Mmw/edit?usp=sharing
Documentation of transformers: https://huggingface.co/transformers/index.html
Model Hub: https://huggingface.co/models
List of models per language: https://huggingface.co/languages
Blog and demos: https://huggingface.co/blog
Forum: https://discuss.huggingface.co/
Hello,
Abhishek, I found your channel today. This channel is like a gold mine for DS learning. If you don't mind, I have a suggestion: speedruns of old Kaggle competitions. It could be fun and so powerful for learning. Watching the workflow of a GM is so interesting, like you did in "Pair Programming: Deep Learning Model For Drug Classification". I know a competition is difficult to speedrun because of the limited time, but it could be interesting to see what you could do in 1-2 hours of full focus. Thank you!
Would be curious if you've used GANs to produce synthetic data! If so show us how 😁
Sir, can you interview some NLP engineers who work professionally?
RAG!